
Bayesianism for humans: prosaic priors

16 BT_Uytya 24 August 2014 11:14PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before. 
I like the lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. This post is about the second penny.

Prosaic Priors

The second insight can be formulated as «the dull explanations are more likely to be correct because they tend to have high prior probability.»

Why is that? 

1) Almost by definition! Some property X is 'banal' if X applies to a lot of people in a disappointingly mundane way, not having any redeeming features which would make it rarer (and, hence, interesting).

In other words, X is banal iff the base rate of X is high. Or, you could say, the prior probability of X is high.

1.5) Because of Occam's Razor and burdensome details. One way to make something boring more exciting is to add interesting details: some special features which will make sure that this explanation is about you as opposed to 'about almost anybody'.

This could work the other way around: sometimes an explanation feels unsatisfying exactly because it has been shaved of any unnecessary and (ultimately) burdensome details.

2) Often, the alternative to a mundane explanation is something unique and custom-made to fit the case you are interested in. And anybody familiar with overfitting and the conjunction fallacy (and the fact that people tend to love coherent stories with blinding passion1) should be very suspicious of such things. So there could be a strong bias against stale explanations, which should be countered.

* * *

I fully grokked this while in the process of CBT-induced soul-searching; usage in this context still looks the most natural to me, but I believe that the area of application of this heuristic is wider.

Examples

1) I'm fairly confident that I'm an introvert. Still, sometimes I can behave like an extrovert. I was interested in the causes of this "extroversion activation", as I called it2. I suspected that I really had two modes of functioning (with "introversion" being the default one), and some events — for example, mutual interest (when I am interested in the person I am talking to, and xe is interested in me) or feeling high-status — made me switch between them.

Or, you know, it could be just a reduction in social anxiety, which makes people more communicative. Elevated anxiety wasn't a new element to be postulated; I already knew I had it, yet I was tempted to make up new mental entities, and the prosaic explanation about anxiety managed to evade me for a while.

2) I find it hard to do anything I consider worthwhile while on spring break, despite having lots of free time. I tend to make grandiose plans — I should meet new people! I should be more involved in sports! I should start using Anki! I should learn Lojban! I should practice meditation! I should read these textbooks, including doing most of the exercises! — and then fail to do almost anything. Yet I manage to do some impressive stuff during the academic term, despite having less time and more commitments.

This paradoxical situation calls for an explanation.

The first hypothesis that came to my mind was about activation energy. It takes effort to go from "procrastinating" to "doing something"; speaking more generally, you could say that it takes effort to go from a "lazy day" to a "productive day". During the academic term, I am forced to make most of my days productive: I have to attend classes, do homework, etc. And, having already done something good, I can do something else as well. During spring break, I am deprived of that natural structure, and hence I am on my own when it comes to starting to do something I find worthwhile.

The alternative explanation: I was tired. Because, you know, vacation comes right after midterms, and I tend to go all out while preparing for midterms. I am exhausted, my energy and willpower are scarce, so it's no wonder I have trouble making use of my free time.

(I don't really believe the latter explanation (I think that my situation is caused by several factors, including the two outlined above), so it is also an example of a descriptive "probable enough" hypothesis.)

3) This example comes from Slate Star Codex. Nerds tend to find aversive many group bonding activities that usual people supposedly enjoy, such as patriotism, prayer, team sports, and pep rallies. Supposedly, they should feel (with the tear-jerking passion of a thousand exploding suns) a great unity with their fellow citizens, church-goers, teammates or pupils respectively, but instead they feel nothing.

Might it be that nerds are unable to enjoy these activities because something is broken inside their brains? One could be tempted to construct an elaborate argument involving autism spectrum and a mild case of schizoid personality disorder. In other words, this calls for postulating a rare form of autism which affects only some types of social behaviour (perception of group activities), leaving other types unchanged.

Or, you know, maybe nerds just don't like the group they are supposed to root for. Maybe nerds don't feel unity with, and a connection to, The Great Whole because they don't feel like they truly belong there.

As Scott put it, "It’s not that we lack the ability to lose ourselves in an in-group, it’s that all the groups people expected us to lose ourselves in weren’t ones we could imagine as our in-group by any stretch of the imagination"3.

4) This example comes from this short comic titled "Sherlock Holmes in real life".

* * *

...and after this the word "prosaic" quickly turned into an awesome compliment. Like, "so, this hypothesis explains my behaviour well; but is it boring enough?", or "your claim is refreshingly dull; I like it!".


1. If you have read Thinking, Fast and Slow, you probably know what I mean. If you haven't, you can look up the narrative fallacy in order to get a general idea.
2. Which was, as I now realize, an excellent way to deceive myself by using a word with a lot of hidden assumptions. Taboo your words, folks!
3. As a side note, my friend proposed an alternative explanation: the thing is, nerds are often defined as "the sort of people who dislike pep rallies". So, naturally, we have "usual people" who like pep rallies and "nerds" who avoid them. And then "nerds dislike pep rallies" is a tautology rather than something to be explained.

What is the difference between rationality and intelligence?

10 Wei_Dai 13 August 2014 11:19AM

Or to ask the question another way, is there such a thing as a theory of bounded rationality, and if so, is it the same thing as a theory of general intelligence?

The LW Wiki defines general intelligence as "ability to efficiently achieve goals in a wide range of domains", while instrumental rationality is defined as "the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences". These definitions seem to suggest that rationality and intelligence are fundamentally the same concept.

However, rationality and AI have separate research communities. This seems to be mainly for historical reasons, because people studying rationality started with theories of unbounded rationality (i.e., with logical omniscience or access to unlimited computing resources), whereas AI researchers started off trying to achieve modest goals in narrow domains with very limited computing resources. However rationality researchers are trying to find theories of bounded rationality, while people working on AI are trying to achieve more general goals with access to greater amounts of computing power, so the distinction may disappear if the two sides end up meeting in the middle.

We also distinguish between rationality and intelligence when talking about humans. I understand the former as the ability of someone to overcome various biases, which seems to consist of a set of skills that can be learned, while the latter is a kind of mental firepower measured by IQ tests. This seems to suggest another possibility. Maybe (as Robin Hanson recently argued on his blog) there is no such thing as a simple theory of how to optimally achieve arbitrary goals using limited computing power. In this view, general intelligence requires cooperation between many specialized modules containing domain specific knowledge, so "rationality" would just be one module amongst many, which tries to find and correct systematic deviations from ideal (unbounded) rationality caused by the other modules.

I was more confused when I started writing this post, but now I seem to have largely answered my own question (modulo the uncertainty about the nature of intelligence mentioned above). However I'm still interested to know how others would answer it. Do we have the same understanding of what "rationality" and "intelligence" mean, and know what distinction someone is trying to draw when they use one of these words instead of the other?

ETA: To clarify, I'm asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs intelligence may give us a clue to that answer, but isn't the main thing that I'm interested in here.

Moloch: optimisation, "and" vs "or", information, and sacrificial ems

19 Stuart_Armstrong 06 August 2014 03:57PM

Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetical looks at the future that I've ever seen.

Go read it.

Don't worry, I can wait. I'm only a piece of text, my patience is infinite.

De-dum, de-dum.

You sure you've read it?

Ok, I believe you...

Really.

I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be monster enough to abuse the trust of a being as defenceless as a constant string of ASCII symbols?

Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.

 

Academic Moloch

Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").

The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivation, but it's not obligatory for an optimisation process.

One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or whether a minimum of each is required. So can we make a campaign that is purely appearance-based, without any substantive positions ("or": a maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.

Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.

This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.

Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for it. And secondly, that em once was human. If the em were entirely stripped of human desires, the situation would be less tragic. And if the em were further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection with the Moloch argument is that this shows that certain nightmare scenarios could, in some circumstances, be adjusted to much better outcomes with a small amount of coordination.

 

The point of the post

The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.

Raven paradox settled to my satisfaction

8 Manfred 06 August 2014 02:46AM

The raven paradox, originated by Carl Gustav Hempel, is an apparent absurdity of inductive reasoning. Consider the hypothesis:

H1: All ravens are black.

Inductively, one might expect that seeing many black ravens and no non-black ones is evidence for this hypothesis. As you see more black ravens, you may even find it more and more likely.

Logically, a statement is equivalent to its contrapositive (where you negate both things and flip the order). Thus if "if it is a raven, it is black" is true, so is:

H1': If it is not black, it is not a raven.

Take a moment to double-check this.

Inductively, just like with H1, one would expect that seeing many non-black non-ravens is evidence for this hypothesis. As you see more and more examples, you may even find it more and more likely. Thus a yellow banana is evidence for the hypothesis "all ravens are black."

Since this is silly, there is an apparent problem with induction.

 

Resolution

Consider the following two possible states of the world:

Hypothesis 1: 100 black ravens. Hypothesis 2: 99 black ravens and 1 yellow raven.

Suppose that these are your two hypotheses, and you observe a yellow banana (drawing from some fixed distribution over things). Q: What does this tell you about one hypothesis versus another? A: It tells you bananas-all about the number of black ravens.

One might contrast this with a third hypothesis where there is one fewer banana and one more yellow raven, by some sort of spontaneous generation.

Observations of both black ravens and yellow bananas now cause us to prefer hypothesis 1 over this hypothesis 3!

The moral of the story is that the amount of evidence that an observation provides is not just about whether it is consistent with the "active" hypothesis - it is about the difference in likelihood between when the hypothesis is true versus when it's false.

This is a pretty straightforward moral - it's a widely known pillar of statistical reasoning. But its absence in the raven paradox takes a bit of effort to see. This is because we're using an implicit model of the problem (driven by some combination of outside knowledge and framing effects) where nonblack ravens replace black ravens, but don't replace bananas. The logical statements H1 and H1' are not alone enough to tell how you should update upon seeing new evidence. Or to put it another way, the version of induction that drives the raven paradox is in fact wrong, but probability theory implies a bigger version.
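
To make the moral concrete, here is a small Python sketch (the object counts are invented for illustration; only the contrasts matter). A yellow banana has the same likelihood under the first two hypotheses, so it shifts the odds between them not at all, while both black ravens and yellow bananas slightly favour hypothesis 1 over a hypothesis 3 in which a banana has been replaced by a yellow raven:

```python
# Three toy worlds; we observe one object drawn uniformly at random from a world.
worlds = {
    "H1": {"black raven": 100, "yellow raven": 0, "yellow banana": 1000},
    "H2": {"black raven": 99,  "yellow raven": 1, "yellow banana": 1000},
    "H3": {"black raven": 99,  "yellow raven": 1, "yellow banana": 999},
}

def likelihood(world, observation):
    """Probability of drawing `observation` from a uniform draw over all objects in `world`."""
    total = sum(world.values())
    return world[observation] / total

def likelihood_ratio(h_a, h_b, observation):
    """How much the observation favours hypothesis h_a over hypothesis h_b."""
    return likelihood(worlds[h_a], observation) / likelihood(worlds[h_b], observation)

print(likelihood_ratio("H1", "H2", "yellow banana"))  # 1.0: tells you bananas-all about H1 vs H2
print(likelihood_ratio("H1", "H2", "black raven"))    # 100/99: a black raven is (weak) evidence for H1
print(likelihood_ratio("H1", "H3", "yellow banana"))  # slightly > 1: now the banana is (very weak) evidence
```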

 

(Technical note: In the hypotheses above, the exact number of yellow bananas does not have to be the same for observing a yellow banana to provide no evidence - what has to be the same is the measure of yellow bananas in the probability distribution we're drawing from. Talking about "99 ravens" is more understandable, but what differentiates our hypotheses are really the likelihoods of observing different events [there's our moral again]. This becomes particularly important when extending the argument to infinite numbers of ravens - infinities or no infinities, when you make an observation you're still drawing from some distribution.)

Causal Inference Sequence Part II: Graphical Models

8 Anders_H 04 August 2014 11:10PM

(Part 2 of a Sequence on Applied Causal Inference. Follow-up to Part 1)

Saturated and Unsaturated Models

A model is a restriction on the possible states of the world: By specifying a model, you make a claim that you have knowledge about what the world does not look like.  

To illustrate this, if you have two binary predictors A and B, there are four groups defined by A and B, and four different values of E[Y|A,B]. Therefore, the regression E[Y|A,B] = β0 + β1*A + β2*B + β3*A*B is not a real model: there are four parameters and four values of E[Y|A,B], so the regression is saturated. In other words, the regression does not make any assumptions about the joint distribution of A, B and Y. Running this regression in statistical software will simply give you exactly the same estimates as you would have obtained if you manually looked in each of the four groups defined by A and B and estimated the mean of Y.
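
As a quick sanity check, here is a minimal Python sketch (with simulated data and made-up coefficients) showing that the saturated regression reproduces the four group means exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: two binary predictors and a continuous outcome (purely illustrative).
n = 10_000
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
Y = 1.0 + 2.0 * A - 1.0 * B + 0.5 * A * B + rng.normal(0, 1, n)

# Saturated regression: intercept, A, B and the A*B interaction (four parameters).
X = np.column_stack([np.ones(n), A, B, A * B])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted value in each of the four groups...
fitted = {(a, b): beta[0] + beta[1] * a + beta[2] * b + beta[3] * a * b
          for a in (0, 1) for b in (0, 1)}
# ...equals the raw mean of Y in that group: the saturated model makes no assumptions.
means = {(a, b): Y[(A == a) & (B == b)].mean() for a in (0, 1) for b in (0, 1)}

for group in fitted:
    print(group, round(fitted[group], 4), round(means[group], 4))  # identical, up to floating point
```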

If we instead fit the regression model E[Y|A,B] = β0 + β1*A + β2*B, we are making an assumption: We are assuming that there is no interaction between A and B on the average value of Y. In contrast to the previous regression, this is a true model: It makes the assumption that the value of β3 is 0. In other words, we are saying that the data did not come from a distribution where β3 is not equal to 0. If this assumption is not true, the model is wrong: We would have excluded the true state of the world.

In general, whenever you use models, think first about what the saturated model looks like, and then add assumptions by asking what parameters  you can reasonably assume are equal to a specific value (such as zero). The same type of logic applies to graphical models such as directed acyclic graphs (DAGs).  

We will talk about two types of DAGs:   Statistical DAGs are models for the joint distribution of the variables on the graph, whereas Causal DAGs are a special class of DAGs which can be used as models for the data generating mechanism.  

Statistical DAGs

A Statistical DAG is a graph that allows you to encode modelling assumptions about the joint distribution of the individual variables. These graphs do not necessarily have any causal interpretation.  

On a Statistical DAG, we represent modelling assumptions by missing arrows. Those missing arrows define the DAG in the same way that the missing term for β3 defines the regression model above.  If there is a directed arrow between any two variables on the graph, the DAG is saturated or complete.  Complete DAGs make no modelling assumptions about the relationship between the variables, in the same way that a saturated regression model makes no modelling assumptions. 

DAG Factorization

The arrows on DAGs are statements about how the joint distribution factorizes. To illustrate, consider the following complete DAG (where each individual patient in our study represents a realization of the joint distribution of the variables A, B, C and D):

[Figure: a complete DAG on A, B, C and D, with an arrow from each variable to every variable that comes after it]

Any joint distribution of A, B, C and D can be factorized algebraically according to the laws of probability as f(A,B,C,D) = f(D|C,B,A) * f(C|B,A) * f(B|A) * f(A). This factorization is always true; it does not require any assumptions about independence. By drawing a complete DAG, we are saying that we are not willing to make any further assumptions about how the distribution factorizes.

Assumptions are represented by missing arrows: Every variable is assumed to be independent of the past, given its parents.  Now, consider the following DAG with three missing arrows:

[Figure: a DAG on A, B, C and D with arrows A→B, A→D and C→D; the arrows A→C, B→C and B→D are missing]

This DAG is defined by the assumption that C is independent of the joint distribution of A and B, and that D is independent of B, given A and C. If this assumption is true, the distribution can be factorized as f(A,B,C,D) = f(D|C,A) * f(C) * f(B|A) * f(A). Unlike the factorization of the complete DAG, the above is not a tautology. It is the algebraic representation of the independence assumption that is represented by the missing arrows. The factorization is the modelling assumption: When arrows are missing, you are really saying that you have a priori knowledge about how the distribution factorizes.
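
A small simulation sketch (the conditional probabilities below are invented) may help make this concrete: if we generate data according to the factorization f(A) * f(B|A) * f(C) * f(D|C,A), the independences encoded by the missing arrows show up in the sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Generate according to f(A) * f(B|A) * f(C) * f(D|C,A): the missing arrows mean
# that C has no parents and that D does not depend on B.
A = rng.binomial(1, 0.5, n)
B = rng.binomial(1, 0.2 + 0.6 * A, n)             # B depends on A
C = rng.binomial(1, 0.3, n)                       # C depends on nothing
D = rng.binomial(1, 0.1 + 0.4 * A + 0.4 * C, n)   # D depends on A and C, but not on B

# C is (approximately) independent of A and of B:
print(C[A == 0].mean(), C[A == 1].mean())         # both ≈ 0.3
print(C[B == 0].mean(), C[B == 1].mean())         # both ≈ 0.3

# Within levels of A and C, learning B tells you nothing about D:
for a in (0, 1):
    for c in (0, 1):
        group = (A == a) & (C == c)
        print(a, c,
              D[group & (B == 0)].mean(),
              D[group & (B == 1)].mean())          # approximately equal in every stratum
```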

 

D-Separation

When we make assumptions such as the ones that define a DAG, other independences may automatically follow as logical implications. The reason DAGs are useful, is that you can use the graphs as a tool for reasoning about what independence statements are logical implications of the modelling assumptions. You could reason about this using algebra, but it is usually much harder.  D-Separation is a simple graphical criterion that gives you an immediate answer to whether a particular statement about independence is a logical implication of the independences that define your model.

Two variables are independent (in all distributions that are consistent with the DAG) if there is no open path between them. This is called «d-separation». D-separation is useful because it allows us to determine whether a particular independence statement is true within our model. For example, if we want to know whether A is independent of B given C, we check whether A is d-separated from B on the graph where C is conditioned on.

A path between two variables is any set of edges that connects them. For determining whether a path exists, the direction of the arrows does not matter: A-->B-->C and A-->B<--C are both examples of paths between A and C. Using the rules of d-separation, you can determine whether paths are open or closed.

 

The Rules of D-Separation

Colliders:

If you are considering three variables, they can be connected in four different ways:

 A --> B --> C

 A <-- B <-- C

 A <-- B --> C

 A --> B <-- C

 

  • In the first three cases, B is a non-collider.
  • In the fourth case,  B is a collider: The arrows from A and C "collide" in B.
  • Non-Colliders are (normally) open, whereas colliders are (normally) closed
  • Colliders are defined relative to a specific pathway.  B could be a collider on one pathway, and a non-collider on another pathway 

 

Conditioning:

If we compare individuals within levels of a covariate, that covariate is conditioned on. In an empirical study, this can happen either by design or by accident. On a graph, we represent “conditioning” by drawing a box around that variable. This is equivalent to introducing the variable behind the conditioning sign in the algebraic notation.

 

  • If a non-collider is conditioned on, it becomes closed.
  • If a collider is conditioned on, it is opened.
  • If a descendant of a collider is conditioned on, the collider is opened.

 

Open and Closed Paths:

 

  • A path is open if and only if all variables on the path are open. 
  • Two variables are D-separated if and only if there is no open path between them
  • Two variables are D-separated conditional on a third variable if and only if there is no open path between them on a graph where the third variable has been conditioned on.

 

Colliders:

Many students who first encounter D-separation are confused about why conditioning on a collider opens it.  Pearl uses the following thought experiment to illustrate what is going on:

Imagine you live in a world where there is a sprinkler that sometimes randomly turns on, regardless of the weather. In this world, whether the sprinkler is on is independent of rain:  If you notice that the sprinkler is on, this gives you no information about whether it rains. 

However, if the sprinkler is on, it will cause the grass to be wet. The same thing happens if it rains. Therefore, the grass being wet is a collider.   Now imagine that you have noticed that the grass is wet.  You also notice that the sprinkler is turned "off".  In this situation, because you have conditioned on the grass being wet,  the fact that the sprinkler is off allows you to conclude that it is probably raining.  
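
A quick simulation (with invented probabilities) shows the same thing numerically: rain and the sprinkler are marginally independent, but once we condition on the collider (wet grass), learning that the sprinkler is off raises the probability of rain:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

rain      = rng.binomial(1, 0.3, n)                        # it rains on 30% of days
sprinkler = rng.binomial(1, 0.4, n)                        # the sprinkler fires randomly, independent of rain
wet_grass = ((rain == 1) | (sprinkler == 1)).astype(int)   # collider: wet if either cause is present

# Marginally, the sprinkler tells you nothing about rain:
print(rain[sprinkler == 0].mean(), rain[sprinkler == 1].mean())   # both ≈ 0.30

# Conditioning on the collider opens the path between its causes:
wet = wet_grass == 1
print(rain[wet & (sprinkler == 0)].mean())   # ≈ 1.00: the grass is wet and the sprinkler is off, so it rained
print(rain[wet & (sprinkler == 1)].mean())   # ≈ 0.30: the sprinkler already explains the wet grass
```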

Faithfulness

D-Separation says that if there is no open pathway between two variables, those variables are independent (in all distributions that factorize according to the DAG, i.e., in all distributions where the defining independences hold). This immediately raises the question of whether the logic also runs in the opposite direction: If there is an open pathway between two variables, does that mean that they are correlated?

The quick answer is that this does not hold, at least not without additional assumptions. DAGs are defined by assumptions that are represented by the missing arrows: Any joint distribution where those independences hold can be represented by the DAG, even if there are additional independences that are not encoded. However, we usually think of two variables as correlated if they are connected: This assumption is called faithfulness.

Causal DAGs

Causal DAGs are models for the data generating mechanism. The rules that apply to statistical DAGs - such as d-separation - are also valid for Causal DAGs.  If a DAG is «causal», we are simply making the following additional assumptions: 

  • The variables are in temporal (causal) order
  • If two variables on the DAG share a common cause, the common cause is also shown on the graph

If you are willing to make these assumptions, you can think of the Causal DAG as a map of the data generating mechanism. You can read the map as saying that all variables are generated by random processes with a deterministic component that depends only on the parents.  

For example, if variable Y has two parents A and U, the model says that Y = f(A, U, *), where * is a random error term. The shape of the function f is left completely unspecified, hence the name "non-parametric structural equations model". The primary assumption in the model is that the error terms on different variables are independent.

You can also think informally of the arrows as the causal effect of one variable on another:  If we change the value of A, this change would propagate to downstream variables, but not to variables that are not downstream.

Recall that DAGs are useful for reasoning about independences.  Exchangeability assumptions are a special type of independence statements: They involve counterfactual variables.  Counterfactual variables belong in the data generating mechanism, therefore, to reason about them, we will need Causal DAGs.  

A simplified heuristic for thinking about Causal DAGs is as follows:   Correlation flows through any open pathway, but causation flows only in the forward direction.  If you are interested in estimating the causal effect of A on Y, you have to quantify the sum of all forward-going pathways from A to Y.   Any open pathway from A to Y which contains an arrow in the backwards direction will cause bias. 
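
Here is a toy sketch of that heuristic (all numbers are invented): A has no causal effect on Y, but the backwards path A <- U -> Y is open, so the raw association between A and Y is biased; conditioning on the common cause U closes the path and the association vanishes:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# A <- U -> Y, with no arrow from A to Y: the true causal effect of A on Y is zero.
U = rng.binomial(1, 0.5, n)                    # common cause
A = rng.binomial(1, 0.2 + 0.6 * U, n)          # A is more likely when U = 1
Y = 2.0 * U + rng.normal(0, 1, n)              # Y depends on U only

# The open backwards path makes A and Y correlated:
print(Y[A == 1].mean() - Y[A == 0].mean())     # ≈ 1.2, despite A having no causal effect

# Conditioning on U closes the path; within levels of U the association disappears:
for u in (0, 1):
    print(u, Y[(U == u) & (A == 1)].mean() - Y[(U == u) & (A == 0)].mean())   # both ≈ 0
```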

In the next part in this sequence (which I hope to post next week), I will give a more detailed description of how we can use Causal DAGs to reason about bias in observational research, including confounding bias, selection bias and mismeasurement bias. 

(Feedback is greatly appreciated:  I invoke Crocker's rules.  The most important types of feedback will be if you notice anything that is wrong or misleading.  I also greatly appreciate feedback on whether the structure of the text works, whether the sentences are hard to parse and whether there is any background information that needs to be included)


Me and M&Ms

11 coyotespike 02 August 2014 07:06PM

Ah, delicious dark chocolate M&Ms, colorfully filling a glass jar with your goodness. How do I love thee? About four of you an hour. Here's a brief rundown of my most recent motivation hacking experiment. 

1. Gwern has an interesting article arguing that Massive Open Online Courses (MOOCs) may shift the learning advantage from intelligence toward conscientiousness (actually he's not sure about the intelligence part). This shift occurs because MOOCs select for higher-quality instruction and better feedback, broadly speaking and over time, but it's much harder to stay on task without a malevolent instructor and bad grades breathing down your neck. This thesis jibes with my own experience; if I get stuck on a math problem, I just google "an intuitive approach to x," and I usually find a couple of people begging to teach me the concept. But it's harder to get started and to stay focused than in a classroom.

2. Given that knowledge compounds and grants increasing advantages, I'd really like to keep taking advantage of MOOCs. Some MOOCs are better than others, but many are better than your standard college course - and they're free. For a non-technical guy getting technical, like me, it's a golden age of education. So, it would be great if I were highly conscientious. Gwern points out that conscientiousness is a relatively stable Big Five personality trait.

3. The question then becomes, can conscientiousness be developed? Well, I'm not a Cartesian agent, so wouldn't it make sense to reward myself for conscientiousness? Enter the M&Ms. I set a daily target for pomodoros. When I finish a pomodoro, I get a big peanut M&M or two small ones. If I finish two in a row, I get two servings, and so on. In this way, I encourage myself to get started, and then to keep going to build Deep Focus. Each pomodoro becomes cause for celebration, and I find my rapid progress through pomodoros (and chocolate) energizing, where long periods of distraction were tiring.

This has worked fantastically well for the last two weeks. I hit my pomodoro target for paid work, then switch to educational work. I plan to keep it up, and maybe I'll use chocolate as motivation somewhere else as well. Now back to my M&Ms, green, yellow, blue, orange, brown, red . . . 

Maybe we're not doomed

9 Manfred 02 August 2014 03:22PM

This is prompted by Scott's excellent article, Meditations on Moloch.

I might caricature (grossly unfairly) his post like this:

  1. Map some central problems for humanity onto the tragedy of the commons.
  2. Game theory says we're doomed.

Of course my life is pretty nice right now. But, goes the story, this is just a non-equilibrium starting period. We're inexorably progressing towards a miserable Nash equilibrium, and once we get there we'll be doomed forever. (This forever loses a bit of foreverness if one expects everything to get interrupted by self-improving AI, but let's elide that.)

There are a few ways we might not be doomed. The first and less likely is that people will just decide not to go to their doom, even though it's the Nash equilibrium. To give a totally crazy example, suppose there were two countries playing a game where the first one to launch missiles had a huge advantage. And neither country trusts the other, and there are multiple false alarms - thus pushing the situation to the stable Nash equilibrium of both countries trying to launch first. Except imagine that somehow, through some heroic spasm of insanity, these two countries just decided not to nuke each other. That's the sort of thing it would take.

Of course, people are rarely able to be that insane, so success that way should not be counted on. But on the other hand, if we're doomed forever such events will eventually occur - like a bubble of spontaneous low entropy spawning intelligent life in a steady-state universe.

The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.

Scott devotes a sub-argument to why we're still doomed to things being miserable even if we solve coordination problems with government:
  1. Incentives for government employees sometimes don't match the needs of the people.
  2. This has costs, and those costs help explain why some things that suck, suck.

I agree with this, but not all governments are equally costly as coordination technologies. Heck, not all governments are even a technology for improving peoples' lives - look at North Korea. My point is that there's no particular reason that the costs can't be small, with sufficiently advanced cultural technology.

More interesting to me than government is the idea of iterating a game to encourage cooperation. In the normal prisoner's dilemma game, the only Nash equilibrium is defect-defect and so the prisoners are doomed. But if you have to play the prisoner's dilemma game repeatedly, with a variety of other players, the best strategy turns out to be a largely cooperative one. This evasion of doom gives every player an incentive to try and replace one-shot dilemmas with iterated ones. Could Scott's post look like this?
  1. Map some central problems for humanity onto the iterated prisoner's dilemma.
  2. Evolutionary game theory says we're not doomed.

In short, I think this idea of "if you know the Nash equilibrium sucks, everyone will help you change the game" is an important one. Though given human irrationality, game-theoretic predictions (whether of eventual doom or non-doom) should be taken less than literally.
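
As a toy illustration of why iteration helps (a sketch using the standard prisoner's dilemma payoffs and an assumed small population), a round-robin tournament shows each tit-for-tat player outscoring both unconditional cooperators and unconditional defectors:

```python
import itertools

# Standard one-shot payoffs (assumed): (my move, their move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    """Score two strategies against each other over an iterated game."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# A small assumed population: mostly reciprocators, plus one sucker and one exploiter.
population = [("tit_for_tat", tit_for_tat)] * 3 + [("always_cooperate", always_cooperate),
                                                   ("always_defect", always_defect)]

totals = [0] * len(population)
for i, j in itertools.combinations(range(len(population)), 2):
    score_i, score_j = play(population[i][1], population[j][1])
    totals[i] += score_i
    totals[j] += score_j

for (name, _), score in zip(population, totals):
    print(name, score)   # each tit_for_tat copy ends up with the highest score
```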

How to treat problems of unknown difficulty

12 owencb 30 July 2014 11:27AM

Crossposted from the Global Priorities Project

This is the first in a series of posts which take aim at the question: how should we prioritise work on problems where we have very little idea of our chances of success? In this post we’ll see some simple models-from-ignorance which allow us to produce some estimates of the chances of success from extra work. In later posts we’ll examine the counterfactuals to estimate the value of the work. For those who prefer a different medium, I gave a talk on this topic at the Good Done Right conference in Oxford this July.

Introduction

How hard is it to build an economically efficient fusion reactor? How hard is it to prove or disprove the Goldbach conjecture? How hard is it to produce a machine superintelligence? How hard is it to write down a concrete description of our values?

These are all hard problems, but we don’t even have a good idea of just how hard they are, even to an order of magnitude. This is in contrast to a problem like giving a laptop to every child, where we know that it’s hard but we could produce a fairly good estimate of how much resources it would take.

Since we need to make choices about how to prioritise between work on different problems, this is clearly an important issue. We can prioritise using benefit-cost analysis, choosing the projects with the highest ratio of future benefits to present costs. When we don’t know how hard a problem is, though, our ignorance makes the size of the costs unclear, and so the analysis is harder to perform. Since we make decisions anyway, we are implicitly making some judgements about when work on these projects is worthwhile, but we may be making mistakes.

In this article, we’ll explore practical epistemology for dealing with these problems of unknown difficulty.

Definition

We will use a simplifying model for problems: that they have a critical threshold D such that the problem will be completely solved when D resources are expended, and not at all before that. We refer to this as the difficulty of the problem. After the fact the graph of success with resources will look something like this:

Of course the assumption is that we don’t know D. So our uncertainty about where the threshold is will smooth out the curve in expectation. Our expectation beforehand for success with resources will end up looking something like this:

Assuming a fixed difficulty is a simplification, since of course resources are not all homogenous, and we may get lucky or unlucky. I believe that this is a reasonable simplification, and that taking these considerations into account would not change our expectations by much, but I plan to explore this more carefully in a future post.

What kind of problems are we looking at?

We’re interested in one-off problems where we have a lot of uncertainty about the difficulty. That is, the kind of problem we only need to solve once (answering a question the first time can be Herculean; answering it a second time is trivial), and which may not easily be placed in a reference class with other tasks of similar difficulty. Knowledge problems, as in research, are a central example: they boil down to finding the answer to a question. The category might also include trying to effect some systemic change (for example by political lobbying).

This is in contrast to engineering problems which can be reduced, roughly, to performing a known task many times. Then we get a fairly good picture of how the problem scales. Note that this includes some knowledge work: the “known task” may actually be different each time. For example, proofreading two pages of text is not quite the same task twice over, but we have a fairly good reference class, so we can estimate moderately well the difficulty of proofreading a page of text, and quite well the difficulty of proofreading a 100,000-word book (where the length helps to smooth out the variance in estimates of individual pages).

Some knowledge questions can naturally be broken up into smaller sub-questions. However these typically won’t be a tight enough class that we can use this to estimate the difficulty of the overall problem from the difficulty of the first few sub-questions. It may well be that one of the sub-questions carries essentially all of the difficulty, so making progress on the others is only a very small help.

Model from extreme ignorance

One approach to estimating the difficulty of a problem is to assume that we understand essentially nothing about it. If we are completely ignorant, we have no information about the scale of the difficulty, so we want a scale-free prior. This determines that the prior obeys a power law. Then, we update on the amount of resources we have already expended on the problem without success. Our posterior probability distribution for how many resources are required to solve the problem will then be a Pareto distribution. (Fallenstein and Mennen proposed this model for the difficulty of the problem of making a general-purpose artificial intelligence.)

There is still a question about the shape parameter of the Pareto distribution, which governs how thick the tail is. It is hard to see how to infer this from a priori reasons, but we might hope to estimate it by generalising from a very broad class of problems people have successfully solved in the past.
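
To see what this model implies in practice, here is a small Python sketch (the shape parameter 0.5 is purely illustrative). Given z resources already spent without success, the posterior is Pareto with scale z, so the chance of success from further investment depends only on the ratio of new resources to old:

```python
# Pareto ("scale-free prior") model of problem difficulty; alpha is the shape parameter.
alpha = 0.5   # purely illustrative; in practice this would have to be estimated somehow

def chance_of_success(z_spent, extra):
    """P(problem gets solved within `extra` more resources | `z_spent` spent without success)."""
    # Posterior: P(D > d | D > z_spent) = (z_spent / d) ** alpha, for d >= z_spent.
    return 1 - (z_spent / (z_spent + extra)) ** alpha

print(chance_of_success(10, 10))       # ≈ 0.29 from doubling the investment...
print(chance_of_success(1000, 1000))   # ...and exactly the same at a much larger scale
print(chance_of_success(10, 90))       # ≈ 0.68 from a tenfold increase
```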

This idealised case is a good starting point, but in actual cases, our estimate may be wider or narrower than this. Narrower if either we have some idea of a reasonable (if very approximate) reference class for the problem, or we have some idea of the rate of progress made towards the solution. For example, assuming a Pareto distribution implies that there’s always a nontrivial chance of solving the problem at any minute, and we may be confident that we are not that close to solving it. Broader because a Pareto distribution implies that the problem is certainly solvable, and some problems will turn out to be impossible.

This might lead people to criticise the idea of using a Pareto distribution. If they have enough extra information that they don’t think their beliefs represent a Pareto distribution, can we still say anything sensible?

Reasoning about broader classes of model

In the previous section, we looked at a very specific and explicit model. Now we take a step back. We assume that people will have complicated enough priors and enough minor sources of evidence that it will in practice be impossible to write down a true distribution for their beliefs. Instead we will reason about some properties that this true distribution should have.

The cases we are interested in are cases where we do not have a good idea of the order of magnitude of the difficulty of a task. This is an imprecise condition, but we might think of it as meaning something like:

There is no difficulty X such that we believe the probability of D lying between X and 10X is more than 30%.

Here the “30%” figure can be adjusted up for a less stringent requirement of uncertainty, or down for a more stringent one.

Now consider what our subjective probability distribution might look like, where difficulty lies on a logarithmic scale. Our high level of uncertainty will smooth things out, so it is likely to be a reasonably smooth curve. Unless we have specific distinct ideas for how the task is likely to be completed, this curve will probably be unimodal. Finally, since we are unsure even of the order of magnitude, the curve cannot be too tight on the log scale.

Note that this should be our prior subjective probability distribution: we are gauging how hard we would have thought it was before embarking on the project. We’ll discuss below how to update this in the light of information gained by working on it.

The distribution might look something like this:

In some cases it is probably worth trying to construct an explicit approximation of this curve. However, this could be quite labour-intensive, and we usually have uncertainty even about our uncertainty, so we will not be entirely confident with what we end up with.

Instead, we could ask what properties tend to hold for this kind of probability distribution. For example, one well-known phenomenon which is roughly true of these distributions but not all probability distributions is Benford’s law.

Approximating as locally log-uniform

It would sometimes be useful to be able to make a simple analytically tractable approximation to the curve. This could be faster to produce, and easily used in a wider range of further analyses than an explicit attempt to model the curve exactly.

As a candidate for this role, we propose working with the assumption that the distribution is locally flat. This corresponds to being log-uniform. The smoothness assumptions we made should mean that our curve is nowhere too far from flat. Moreover, it is a very easy assumption to work with, since it means that the expected returns scale logarithmically with the resources put in: in expectation, a doubling of the resources is equally good regardless of the starting point.

It is, unfortunately, never exactly true. Although our curves may be approximately flat, they cannot be everywhere flat -- this can’t even give a probability distribution! But it may work reasonably as a model of local behaviour. If we want to turn it into a probability distribution, we can do this by estimating the plausible ranges of D and assuming it is uniform across this scale. In our example we would be approximating the blue curve by something like this red box:

Obviously in the example the red box is not a fantastic approximation. But nor is it a terrible one. Over the central range, it is never out from the true value by much more than a factor of 2. While crude, this could still represent a substantial improvement on the current state of some of our estimates. A big advantage is that it is easily analytically tractable, so it will be quick to work with. In the rest of this post we’ll explore the consequences of this assumption.

Places this might fail

In some circumstances, we might expect high uncertainty over difficulty without everywhere having local log-returns. A key example is if we have bounds on the difficulty at one or both ends.

For example, if we are interested in X, which comprises a task of radically unknown difficulty plus a repetitive and predictable part of difficulty 1000, then our distribution of beliefs of the difficulty about X will only include values above 1000, and may be quite clustered there (so not even approximately logarithmic returns). The behaviour in the positive tail might still be roughly logarithmic.

In the other direction, we may know that there is a slow and repetitive way to achieve X, with difficulty 100,000. We are unsure whether there could be a quicker way. In this case our distribution will be uncertain over difficulties up to around 100,000, then have a spike. This will give the reverse behaviour, with roughly logarithmic expected returns in the negative tail, and a different behaviour around the spike at the upper end of the distribution.

In some sense each of these is diverging from the idea that we are very ignorant about the difficulty of the problem, but it may be useful to see how the conclusions vary with the assumptions.

Implications for expected returns

What does this model tell us about the expected returns from putting resources into trying to solve the problem?

Under the assumption that the prior is locally log-uniform, the full value is realised over the width of the box in the diagram. This is w = log(y) - log(x), where x is the value at the start of the box (where the problem could first be plausibly solved), y is the value at the end of the box, and our logarithms are natural. Since it’s a probability distribution, the height of the box is 1/w.

For any z between x and y, the modelled chance of success from investing z resources is equal to the fraction of the box which has been covered by that point. That is:

(1) Chance of success before reaching z resources = log(z/x)/log(y/x).

So while we are in the relevant range, the chance of success is equal for any doubling of the total resources. We could say that we expect logarithmic returns on investing resources.

Marginal returns

Sometimes of greater relevance to our decisions is the marginal chance of success from adding an extra unit of resources at z. This is given by the derivative of Equation (1):

(2) Chance of success from a marginal unit of resources at z = 1/(zw).

So far, we’ve just been looking at estimating the prior probabilities -- before we start work on the problem. Of course when we start work we generally get more information. In particular, if we would have been able to recognise success, and we have invested z resources without observing success, then we learn that the difficulty is at least z. We must update our probability distribution to account for this. In some cases we will have relatively little information beyond the fact that we haven’t succeeded yet. In that case the update will just be to curtail the distribution to the left of z and renormalise, looking roughly like this:

Again the blue curve represents our true subjective probability distribution, and the red box represents a simple model approximating this. Now the simple model gives slightly higher estimated chance of success from an extra marginal unit of resources:

(3) Chance of success from an extra unit of resources after z = 1/(z*(ln(y)-ln(z))).

Of course in practice we often will update more. Even if we don’t have a good idea of how hard fusion is, we can reasonably assign close to zero probability that an extra $100 today will solve the problem today, because we can see enough to know that the solution won’t be found imminently. This looks like it might present problems for this approach. However, the truly decision-relevant question is about the counterfactual impact of extra resource investment. The region where we can see little chance of success has a much smaller effect on that calculation, which we discuss below.

Comparison with returns from a Pareto distribution

We mentioned that one natural model of such a process is as a Pareto distribution. If we have a Pareto distribution with shape parameter α, and we have so far invested z resources without success, then we get:

(4) Chance of success from an extra unit of resources = α/z.

This is broadly in line with equation (3). In both cases the key term is a factor of 1/z. In each case there is also an additional factor, representing roughly how hard the problem is. In the case of the log-linear box, this depends on estimating an upper bound for the difficulty of the problem; in the case of the Pareto distribution it is handled by the shape parameter. It may be easier to introspect and extract a sensible estimate for the width of the box than for the shape parameter, since it is couched more in terms that we naturally understand.
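
A quick numerical sketch (with an assumed upper bound y and shape parameter α, chosen only for illustration) makes the comparison concrete; in both models the marginal return is dominated by the 1/z factor, falling by roughly an order of magnitude for every order of magnitude already invested:

```python
import numpy as np

def box_model(z, y):
    """Equation (3): marginal chance of success at z under the locally log-uniform model with upper bound y."""
    return 1 / (z * (np.log(y) - np.log(z)))

def pareto_model(z, alpha):
    """Equation (4): marginal chance of success at z under a Pareto posterior with shape alpha."""
    return alpha / z

# Assumed, purely illustrative values: upper plausible difficulty y = 10^6, shape alpha = 0.25.
for z in (10, 100, 1000, 10_000):
    print(z, box_model(z, 1e6), pareto_model(z, 0.25))
```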

Further work

In this post, we’ve just explored a simple model for the basic question of how likely success is at various stages. Of course it should not be used blindly, as you may often have more information than is incorporated into the model, but it represents a starting point if you don't know where to begin, and it gives us something explicit which we can discuss, critique, and refine.

In future posts, I plan to:

  • Explore what happens in a field of related problems (such as a research field), and explain why we might expect to see logarithmic returns ex post as well as ex ante.
    • Look at some examples of this behaviour in the real world.
  • Examine the counterfactual impact of investing resources working on these problems, since this is the standard we should be using to prioritise.
  • Apply the framework to some questions of interest, with worked proof-of-concept calculations.
  • Consider what happens if we relax some of the assumptions or take different models.

[link] Why Psychologists' Food Fight Matters

28 Pablo_Stafforini 01 August 2014 07:52AM

Why Psychologists’ Food Fight Matters: “Important findings” haven’t been replicated, and science may have to change its ways. By Michelle N. Meyer and Christopher Chabris. Slate, July 31, 2014. [Via Steven Pinker's Twitter account, who adds: "Lesson for sci journalists: Stop reporting single studies, no matter how sexy (these are probably false). Report lit reviews, meta-analyses."] Some excerpts:

Psychologists are up in arms over, of all things, the editorial process that led to the recent publication of a special issue of the journal Social Psychology. This may seem like a classic case of ivory tower navel gazing, but its impact extends far beyond academia. The issue attempts to replicate 27 “important findings in social psychology.” Replication—repeating an experiment as closely as possible to see whether you get the same results—is a cornerstone of the scientific method. Replication of experiments is vital not only because it can detect the rare cases of outright fraud, but also because it guards against uncritical acceptance of findings that were actually inadvertent false positives, helps researchers refine experimental techniques, and affirms the existence of new facts that scientific theories must be able to explain.

One of the articles in the special issue reported a failure to replicate a widely publicized 2008 study by Simone Schnall, now tenured at Cambridge University, and her colleagues. In the original study, two experiments measured the effects of people’s thoughts or feelings of cleanliness on the harshness of their moral judgments. In the first experiment, 40 undergraduates were asked to unscramble sentences, with one-half assigned words related to cleanliness (like pure or pristine) and one-half assigned neutral words. In the second experiment, 43 undergraduates watched the truly revolting bathroom scene from the movie Trainspotting, after which one-half were told to wash their hands while the other one-half were not. All subjects in both experiments were then asked to rate the moral wrongness of six hypothetical scenarios, such as falsifying one’s résumé and keeping money from a lost wallet. The researchers found that priming subjects to think about cleanliness had a “substantial” effect on moral judgment: The hand washers and those who unscrambled sentences related to cleanliness judged the scenarios to be less morally wrong than did the other subjects. The implication was that people who feel relatively pure themselves are—without realizing it—less troubled by others’ impurities. The paper was covered by ABC News, the Economist, and the Huffington Post, among other outlets, and has been cited nearly 200 times in the scientific literature.

However, the replicators—David Johnson, Felix Cheung, and Brent Donnellan (two graduate students and their adviser) of Michigan State University—found no such difference, despite testing about four times more subjects than the original studies. [...]

The editor in chief of Social Psychology later agreed to devote a follow-up print issue to responses by the original authors and rejoinders by the replicators, but as Schnall told Science, the entire process made her feel “like a criminal suspect who has no right to a defense and there is no way to win.” The Science article covering the special issue was titled “Replication Effort Provokes Praise—and ‘Bullying’ Charges.” Both there and in her blog post, Schnall said that her work had been “defamed,” endangering both her reputation and her ability to win grants. She feared that by the time her formal response was published, the conversation might have moved on, and her comments would get little attention.

How wrong she was. In countless tweets, Facebook comments, and blog posts, several social psychologists seized upon Schnall’s blog post as a cri de coeur against the rising influence of “replication bullies,” “false positive police,” and “data detectives.” For “speaking truth to power,” Schnall was compared to Rosa Parks. The “replication police” were described as “shameless little bullies,” “self-righteous, self-appointed sheriffs” engaged in a process “clearly not designed to find truth,” “second stringers” who were incapable of making novel contributions of their own to the literature, and—most succinctly—“assholes.” Meanwhile, other commenters stated or strongly implied that Schnall and other original authors whose work fails to replicate had used questionable research practices to achieve sexy, publishable findings. At one point, these insinuations were met with threats of legal action. [...]

Unfortunately, published replications have been distressingly rare in psychology. A 2012 survey of the top 100 psychology journals found that barely 1 percent of papers published since 1900 were purely attempts to reproduce previous findings. Some of the most prestigious journals have maintained explicit policies against replication efforts; for example, the Journal of Personality and Social Psychology published a paper purporting to support the existence of ESP-like “precognition,” but would not publish papers that failed to replicate that (or any other) discovery. Science publishes “technical comments” on its own articles, but only if they are submitted within three months of the original publication, which leaves little time to conduct and document a replication attempt.

The “replication crisis” is not at all unique to social psychology, to psychological science, or even to the social sciences. As Stanford epidemiologist John Ioannidis famously argued almost a decade ago, “Most research findings are false for most research designs and for most fields.” Failures to replicate and other major flaws in published research have since been noted throughout science, including in cancer research, research into the genetics of complex diseases like obesity and heart disease, stem cell research, and studies of the origins of the universe. Earlier this year, the National Institutes of Health stated “The complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

Given the stakes involved and its centrality to the scientific method, it may seem perplexing that replication is the exception rather than the rule. The reasons why are varied, but most come down to the perverse incentives driving research. Scientific journals typically view “positive” findings that announce a novel relationship or support a theoretical claim as more interesting than “negative” findings that say that things are unrelated or that a theory is not supported. The more surprising the positive finding, the better, even though surprising findings are statistically less likely to be accurate. Since journal publications are valuable academic currency, researchers—especially those early in their careers—have strong incentives to conduct original work rather than to replicate the findings of others. Replication efforts that do happen but fail to find the expected effect are usually filed away rather than published. That makes the scientific record look more robust and complete than it is—a phenomenon known as the “file drawer problem.”

The emphasis on positive findings may also partly explain the fact that when original studies are subjected to replication, so many turn out to be false positives. The near-universal preference for counterintuitive, positive findings gives researchers an incentive to manipulate their methods or poke around in their data until a positive finding crops up, a common practice known as “p-hacking” because it can result in p-values, or measures of statistical significance, that make the results look stronger, and therefore more believable, than they really are. [...]
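As an illustrative aside (not part of the quoted article; the numbers and names are made up), here is a minimal simulation sketch of one common form of p-hacking: testing repeatedly as data accumulates and stopping as soon as p < .05, even when there is no real effect. The false-positive rate climbs well above the nominal 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def phacked_experiment(n_max=100, step=10):
    """Collect data in batches of `step` per group and test after each batch,
    reporting a 'finding' as soon as p < .05 (optional stopping)."""
    control, treatment = [], []
    while len(control) < n_max:
        control.extend(rng.normal(0.0, 1.0, step))    # no true effect in either group
        treatment.extend(rng.normal(0.0, 1.0, step))
        if stats.ttest_ind(control, treatment).pvalue < 0.05:
            return True   # a spurious "significant" result gets written up
    return False          # an honest null result, likely filed away

runs = 2000
rate = sum(phacked_experiment() for _ in range(runs)) / runs
print(f"False-positive rate with optional stopping: {rate:.0%}")
# A single pre-registered, fixed-n test would sit near 5%; optional stopping
# pushes the rate well above that.
```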

The recent special issue of Social Psychology was an unprecedented collective effort by social psychologists to [rectify this situation]—by altering researchers’ and journal editors’ incentives in order to check the robustness of some of the most talked-about findings in their own field. Any researcher who wanted to conduct a replication was invited to preregister: Before collecting any data from subjects, they would submit a proposal detailing precisely how they would repeat the original study and how they would analyze the data. Proposals would be reviewed by other researchers, including the authors of the original studies, and once approved, the study’s results would be published no matter what. Preregistration of the study and analysis procedures should deter p-hacking, guaranteed publication should counteract the file drawer effect, and a requirement of large sample sizes should make it easier to detect small but statistically meaningful effects.

The results were sobering. At least 10 of the 27 “important findings” in social psychology were not replicated at all. In the social priming area, only one of seven replications succeeded. [...]

One way to keep things in perspective is to remember that scientific truth is created by the accretion of results over time, not by the splash of a single study. A single failure-to-replicate doesn’t necessarily invalidate a previously reported effect, much less imply fraud on the part of the original researcher—or the replicator. Researchers are most likely to fail to reproduce an effect for mundane reasons, such as insufficiently large sample sizes, innocent errors in procedure or data analysis, and subtle factors about the experimental setting or the subjects tested that alter the effect in question in ways not previously realized.

Caution about single studies should go both ways, though. Too often, a single original study is treated—by the media and even by many in the scientific community—as if it definitively establishes an effect. Publications like Harvard Business Review and idea conferences like TED, both major sources of “thought leadership” for managers and policymakers all over the world, emit a steady stream of these “stats and curiosities.” Presumably, the HBR editors and TED organizers believe this information to be true and actionable. But most novel results should be initially regarded with some skepticism, because they too may have resulted from unreported or unnoticed methodological quirks or errors. Everyone involved should focus their attention on developing a shared evidence base that consists of robust empirical regularities—findings that replicate not just once but routinely—rather than of clever one-off curiosities. [...]

Scholars, especially scientists, are supposed to be skeptical about received wisdom, develop their views based solely on evidence, and remain open to updating those views in light of changing evidence. But as psychologists know better than anyone, scientists are hardly free of human motives that can influence their work, consciously or unconsciously. It’s easy for scholars to become professionally or even personally invested in a hypothesis or conclusion. These biases are addressed partly through the peer review process, and partly through the marketplace of ideas—by letting researchers go where their interest or skepticism takes them, encouraging their methods, data, and results to be made as transparent as possible, and promoting discussion of differing views. The clashes between researchers of different theoretical persuasions that result from these exchanges should of course remain civil; but the exchanges themselves are a perfectly healthy part of the scientific enterprise.

This is part of the reason why we cannot agree with a more recent proposal by Kahneman, who had previously urged social priming researchers to put their house in order. He contributed an essay to the special issue of Social Psychology in which he proposed a rule—to be enforced by reviewers of replication proposals and manuscripts—that authors “be guaranteed a significant role in replications of their work.” Kahneman proposed a specific process by which replicators should consult with original authors, and told Science that in the special issue, “the consultations did not reach the level of author involvement that I recommend.”

Collaboration between opposing sides would probably avoid some ruffled feathers, and in some cases it could be productive in resolving disputes. With respect to the current controversy, given the potential impact of an entire journal issue on the robustness of “important findings,” and the clear desirability of buy-in by a large portion of psychology researchers, it would have been better for everyone if the original authors’ comments had been published alongside the replication papers, rather than left to appear afterward. But consultation or collaboration is not something replicators owe to original researchers, and a rule to require it would not be particularly good science policy.

Replicators have no obligation to routinely involve original authors because those authors are not the owners of their methods or results. By publishing their results, original authors state that they have sufficient confidence in them that they should be included in the scientific record. That record belongs to everyone. Anyone should be free to run any experiment, regardless of who ran it first, and to publish the results, whatever they are. [...]

some critics of replication drives have been too quick to suggest that replicators lack the subtle expertise to reproduce the original experiments. One prominent social psychologist has even argued that tacit methodological skill is such a large factor in getting experiments to work that failed replications have no value at all (since one can never know if the replicators really knew what they were doing, or knew all the tricks of the trade that the original researchers did), a surprising claim that drew sarcastic responses. [See LW discussion.] [...]

Psychology has long been a punching bag for critics of “soft science,” but the field is actually leading the way in tackling a problem that is endemic throughout science. The replication issue of Social Psychology is just one example. The Association for Psychological Science is pushing for better reporting standards and more study of research practices, and at its annual meeting in May in San Francisco, several sessions on replication were filled to overflowing. International collaborations of psychologists working on replications, such as the Reproducibility Project and the Many Labs Replication Project (which was responsible for 13 of the 27 replications published in the special issue of Social Psychology) are springing up.

Even the most tradition-bound journals are starting to change. The Journal of Personality and Social Psychology—the same journal that, in 2011, refused to even consider replication studies—recently announced that although replications are “not a central part of its mission,” it’s reversing this policy. We wish that JPSP would see replications as part of its central mission and not relegate them, as it has, to an online-only ghetto, but this is a remarkably nimble change for a 50-year-old publication. Other top journals, most notable among them Perspectives on Psychological Science, are devoting space to systematic replications and other confirmatory research. The leading journal in behavior genetics, a field that has been plagued by unreplicable claims that particular genes are associated with particular behaviors, has gone even further: It now refuses to publish original findings that do not include evidence of replication.

A final salutary change is an overdue shift of emphasis among psychologists toward establishing the size of effects, as opposed to disputing whether or not they exist. The very notion of “failure” and “success” in empirical research is urgently in need of refinement. When applied thoughtfully, this dichotomy can be useful shorthand (and we’ve used it here). But there are degrees of replication between success and failure, and these degrees matter.

For example, suppose an initial study of an experimental drug for cardiovascular disease suggests that it reduces the risk of heart attack by 50 percent compared to a placebo pill. The most meaningful question for follow-up studies is not the binary one of whether the drug’s effect is 50 percent or not (did the first study replicate?), but the continuous one of precisely how much the drug reduces heart attack risk. In larger subsequent studies, this number will almost inevitably drop below 50 percent, but if it remains above 0 percent for study after study, then the best message should be that the drug is in fact effective, not that the initial results “failed to replicate.”
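To illustrate that continuous framing (an illustrative sketch, not from the article; the trial parameters are made up), the snippet below simulates a drug whose true relative risk reduction is 25 percent. Estimates from small trials scatter widely, so a flashy early 50 percent is unsurprising, while larger follow-ups converge on the smaller but still real effect.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimated_risk_reduction(n_per_arm, p_placebo=0.10, true_reduction=0.25):
    """Simulate one two-arm trial; return the estimated relative risk reduction."""
    p_drug = p_placebo * (1 - true_reduction)
    events_placebo = rng.binomial(n_per_arm, p_placebo)
    events_drug = rng.binomial(n_per_arm, p_drug)
    if events_placebo == 0:
        return np.nan
    return 1 - (events_drug / n_per_arm) / (events_placebo / n_per_arm)

for n in (100, 1000, 10000):
    estimates = np.array([estimated_risk_reduction(n) for _ in range(500)])
    print(f"n per arm = {n:>5}: mean estimate = {np.nanmean(estimates):5.1%}, "
          f"sd = {np.nanstd(estimates):5.1%}")
# Small trials scatter widely around the true 25% (a lucky 50% is unremarkable);
# larger follow-ups home in on the true effect, which is smaller but still above zero.
```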

Gaming Democracy

8 Froolow 30 July 2014 09:45AM

I live in the UK, which has a very similar voting structure to the US for the purposes of this article. Nevertheless, it may differ on the details, for which I am sorry. I also use a couple of real-life political examples which I hope are uncontroversial enough not to break the unofficial rules here; if they are not, I can change them, since the point of this post is the mechanism of gaming democracy by exploiting swing seats to push rationalist causes, not the examples themselves.

Cory Doctorow writes in the Guardian about using Kickstarter-like thresholds to encourage voting for minority parties:

http://www.theguardian.com/technology/2014/jul/24/how-the-kickstarter-model-could-transform-uk-elections

He points out that nobody votes for minority parties because nobody else votes for them; if you waste your vote on Yellow, that is one fewer vote for the not-quite-so-bad Green that might have stopped the hated Blue candidate from getting in. He argues that you could use the internet to inform people when some pre-set threshold had been triggered with respect to voting for a minor party, and thus encourage them to get out and vote. So, for example, if the margin of victory was 8000 votes and 9000 people agreed with the statement, “If more than 8000 people agree to this statement, then I will go to the polls on election day and vote for the minority Yellow party”, the minority Yellow party would win power even though none of the original 9000 participants would have voted for Yellow without the information-coordinating properties of the internet.
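A minimal sketch of that Kickstarter-style trigger (my own illustration; the numbers mirror Doctorow's example): pledges only convert into an instruction to actually go and vote once enough people have signed up to cover the margin.

```python
def pledge_triggered(margin_of_victory, pledges):
    """Assurance-contract rule: pledges only bind once there are
    enough of them to beat the sitting margin of victory."""
    return len(pledges) > margin_of_victory

pledges = [f"voter_{i}" for i in range(9000)]    # hypothetical sign-ups
print(pledge_triggered(8000, pledges))           # True: everyone is told to go and vote Yellow
```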

I’m not completely sure of the argument, but I looked into some of the numbers myself. There are 23 UK seats (roughly equivalent to Congressional Districts for US readers) with a margin of 500 votes or fewer. So to hold the balance of power in these seats you need to find either 500 non-voters who would be prepared to vote the way you tell them, or 250 voters with the same caveats (voters are worth twice as much as non-voters to the aspiring seat-swinger, since a vote taken from the Blues lowers the margin by one, and a vote given to the Greens lowers the margin by one, and every voter is entitled to both take a vote away from the party they are currently voting for and award a vote to any party of their choice). I’ll call the number of votes required to swing a seat the ‘effective voter’ count, which allows for the fact that some voters count for two.
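The arithmetic behind the ‘effective voter’ count is simple enough to write down explicitly (a sketch using the post's numbers; the function name is mine):

```python
import math

def votes_to_swing(margin, use_switchers=False):
    """People you must coordinate to close a seat's margin of victory.
    A recruited non-voter closes the gap by 1; a switcher (a vote taken from
    the leading party AND given to the challenger) closes it by 2."""
    return math.ceil(margin / 2) if use_switchers else margin

# A seat with a 500-vote margin, as in the text:
print(votes_to_swing(500))                       # 500 recruited non-voters, or
print(votes_to_swing(500, use_switchers=True))   # 250 existing voters who switch
```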

It doesn’t sound impossible to me to reach the effective voter count for some swing constituencies, given that even extremely obvious parody parties can often win back their deposit (500 actual votes, not even ‘effective votes’).

Doctorow wants to use the information co-ordination system to help minority parties reach a wider audience. I think it could be used in a much more active way to force policy promises on uncontroversial but low-status issues from potential future MPs. Let me take as an example ‘Research funding for transhuman causes’. Most people don’t know what transhumanism is, and most people who do know what it is don’t care. Most people who know what it is and care are basically in support of research into transhuman augmentations, but would definitely rank issues like the economy or defence as more important. There is a small constituency of people who oppose transhumanism outright, but they are not single-issue voters either by any means (I imagine opposing transhumanism is strongly correlated with a ‘traditional religious value’ cluster which includes opposing abortion, gay marriage and immigration). Politicians could therefore (almost) costlessly support a small amount of research funding for transhuman causes, which would almost certainly be a sensible move when averaged across the whole country (either you discover something cool, in which case your population is made better off and your army more powerful, or you don’t, and in the worst case you get a decent multiplier effect to the economy from employing a load of materials scientists and bioengineers). However, we know that they won’t do this, because while the benefits to the country might be great, the minor cost of supporting a low-status (‘weird’) project is borne entirely by the individual politician. What I mean by this is that the politician will probably not lose any votes by publicly supporting transhumanism, but will lose status among their peers and will want to avoid this. There’s also a small risk of losing votes from the ‘traditional value’ cluster by supporting transhuman causes, and no obvious demographic with whom supporting those causes gains votes.

This indicates to me that if enough pro-transhumans successfully co-ordinated their action, they could bargain with the politicians standing for office. Let us say there are unequivocally enough transhumans to meet the effective voter threshold for a particular constituency. One person could go round each transhuman (maybe on that city’s subreddit) and get them to agree in principle to vote for whichever candidate will agree to always vote ‘Yes’ on research funding for transhuman causes, up to a maximum of £1bn. Each transhuman might have a weak preference for Blues vs Greens or vice versa, but the appeal is made to their sense of logic; each Blue vote is cancelled out by each Green vote, but each ‘Transhuman’ vote is a step closer to getting transhumanism properly funded, and transhumanism is more important than any marginal policy difference between the two parties. You then go to each candidate and present the evidence that the ‘transhuman’ bloc has the power to swing the election and is well co-ordinated enough to vote as a bloc on election day. If both candidates agree that they will vote ‘Yes’ on the bills you decided on, then send round an electronic message saying – essentially – “Vote your conscience”. If one candidate says ‘Yes’ and the other ‘No’, send round a message saying “Vote Blue” (or Green). If both candidates say ‘No’, send a message saying “Vote for the Transhuman Party (which is me)”, in the hope that you can demonstrate you really did hold the balance of power and so strengthen your negotiating position in the future.
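The decision rule at the end of that paragraph can be written down explicitly; here is a minimal sketch (the structure and names are mine, purely illustrative) of the message the coordinator sends out once each candidate has answered.

```python
def coordinator_message(pledges):
    """Given each candidate's answer to 'will you always vote Yes on the funding
    bill?', return the instruction to send to the coordinated bloc."""
    supporters = [name for name, says_yes in pledges.items() if says_yes]
    if supporters and len(supporters) == len(pledges):
        return "Both candidates pledged: vote your conscience."
    if len(supporters) == 1:
        return f"Only {supporters[0]} pledged: vote {supporters[0]}."
    return "Neither candidate pledged: vote for the Transhuman Party candidate (me)."

# Hypothetical two-candidate seat:
print(coordinator_message({"Blue": True,  "Green": True}))    # vote your conscience
print(coordinator_message({"Blue": False, "Green": True}))    # vote Green
print(coordinator_message({"Blue": False, "Green": False}))   # vote Transhuman Party
```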

If the candidate then goes back on their word, you slash and burn the constituency and make sure that no matter what the next candidate from that party promises, they lose. Also ensure that if that candidate ever stands in a marginal seat again, they lose (effectively ending their political career). This gives a strong incentive for MPs to vote the way they promised, and for parties to allow them to vote the way they promised.

Incidentally, my preferred promise to extract from the candidates (and I don’t think this works in America) is to bring a bill with a particular wording if they win a Private Members’ Ballot (a system whereby junior members enter a lottery to see whose idea for a bill gets a ‘reading’ in the House of Commons, and hence a chance of becoming a law). For example, “This house would fund £1bn worth of transhumanism basic research over the next four years”. This is because it forces MPs to take a position on an issue they otherwise would not want to touch (because it is low-status), and one way out of this bind is to pretend the issue was high-status all along, which would be a good outcome for transhumanism as it means people might start funding it without the complicated information-coordination game I describe above.

One issue with this is that some groups – for example, Eurosceptics – are happy to single-issue vote already, and there are far more Eurosceptics than there are rationalists in the UK. A US equivalent – as far as I understand – might be gun rights activists; they will vote for whatever party deregulates guns furthest, regardless of any other policies it might have, and they are very numerous. This could be a problem, since a more numerous coalition will always beat a less numerous coalition at playing this information coordination game.

The first response is that it might actually be OK if this occurs. Being a Eurosceptic in no way implies a particular position on transhuman issues, so a politician could agree to the demands of the Eurosceptic bloc and the transhuman bloc without issue. The numbers problem only occurs if a particular position automatically implies a position on another issue, which would require a large single-issue anti-transhuman voting bloc, and there isn’t one. There is a small problem if someone is both a Eurosceptic and a transhuman, since you can only categorically agree to vote the way one bloc tells you, but this is a personal issue where you have to decide which issue is more important, not a problem with the system as it stands.

The second response is that you are underestimating the difficulty of co-ordinating a vote in this way. For example, Eurosceptics – as a rule – will want to vote for the minority UKIP party to signal their affiliation with Eurosceptic issues. No matter what position the candidates agree to on Europe, UKIP will always be more extreme on European issues, since the candidate can only agree to sufficiently mainstream policies that the vote-cost of agreeing to the policy publicly is less than the vote-gain of winning over the Eurosceptic bloc. Therefore there will be considerable temptation to defect and vote UKIP in the event of successfully coordinating a policy pledge from a candidate, since the voter has a strong preference for UKIP over any other party. Transhumans – it is hypothesised – have a stronger preference for marginal gains in transhuman funding over any policy difference between the two major parties, and so getting them to ‘hold their nose’ and vote for a candidate they would otherwise not want to is easier.

It is not just transhumanism that this vote-bloc scheme might work for, but transhumanism is certainly a good example. In my mind you could co-ordinate any issue where the proposed voting bloc is:

  1. Intelligent enough to understand why voting for a candidate you don’t like might result in outcomes you do like.
  2. Sufficiently politically unaffiliated that voting for a party they disapprove of is a realistic prospect (hence I’m picking issues young people care about, since they typically don’t vote).
  3. Sufficiently internet-savvy that coordinating by email / reddit is a realistic prospect.
  4. Unopposed by any similar-sized or larger group which fits the above three criteria.
  5. Cares more about this particular issue than any other issue which fits the above four criteria.

Some other good examples of this might be opposing homeopathy on the NHS, encouraging Effective Altruism in government foreign aid, spending a small portion of the Defence budget on FAI, and so on.

Are there any glaring flaws I’ve missed?
