(tl;dr: In this post, I show that prediction markets estimate non-causal probabilities, and therefore cannot be used for decision making by rational agents following causal decision theory. I give a simple example in which such confounding leads a society that has implemented futarchy to make an incorrect decision.)
It is October 2016, and the US Presidential Elections are nearing. The most powerful nation on earth is about to make a momentous decision about whether being the brother of a former president is a more impressive qualification than being the wife of a former president. However, one additional criterion has recently become relevant in light of current affairs: Kim Jong-Un, Great Leader of the Glorious Nation of North Korea, is making noise about his deep hatred for Hillary Clinton. He also occasionally discusses the possibility of nuking a major US city. The US electorate, desperate to avoid being nuked, have come up with an ingenious plan: They set up a prediction market to determine whether electing Hillary will impact the probability of a nuclear attack.
The following rules are stipulated: There are four possible outcomes: "Hillary elected and US nuked", "Hillary elected and US not nuked", "Jeb elected and US nuked", and "Jeb elected and US not nuked". Participants in the market can buy and sell contracts for each of these outcomes; the contract that corresponds to the actual outcome will expire at $100, and all other contracts will expire at $0.
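To make the payoff rule concrete, here is a minimal sketch of the settlement logic in Python (the function and the contract encoding are mine, for illustration only):

```python
# Each contract is a (winner, nuked) pair; the contract matching the
# realized outcome expires at $100, all others expire at $0.

def settle(contract: tuple[str, bool], outcome: tuple[str, bool]) -> int:
    return 100 if contract == outcome else 0

# Example: suppose Hillary is elected and the US is not nuked.
outcome = ("Hillary", False)
for winner in ("Hillary", "Jeb"):
    for nuked in (False, True):
        print((winner, nuked), "expires at", settle((winner, nuked), outcome))
```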
Simultaneously, in a country far, far away, a rebellion is brewing against the Great Leader. The potential challenger not only appears to have no problem with Hillary; he also seems like a reasonable guy who would be unlikely to use nuclear weapons. It is generally believed that the challenger will take power with probability 3/7, and will be exposed and tortured in a forced labor camp for the rest of his miserable life with probability 4/7. Let us stipulate that this information is known to all participants; I am adding this clause in order to demonstrate that the argument does not rely on unknown information or information asymmetry.
A mysterious but trustworthy agent named "Laplace's Demon" has recently appeared, and informed everyone that, to a first approximation, the world is currently in one of seven possible quantum states. The Demon, being a perfect Bayesian reasoner with Solomonoff Priors, has determined that each of these states should be assigned probability 1/7. Knowledge of which state we are in will perfectly predict the future, with one important exception: It is possible for the US electorate to "Intervene" by changing whether Clinton or Bush is elected. This will then cause a ripple effect into all future events that depend on which candidate is elected President, but otherwise change nothing.
The Demon swears up and down that the choice about whether Hillary or Jeb is elected has absolutely no impact in any of the seven possible quantum states. However, because the prediction market has already been set up and there are powerful people with vested interests, it is decided to run the market anyway.
Roughly, the Demon tells you that the world is in one of the following seven states:
| State | Kim overthrown | Election winner (if no intervention) | US Nuked if Hillary elected | US Nuked if Jeb elected | US Nuked |
|---|---|---|---|---|---|
| 1 | No | Hillary | Yes | Yes | Yes |
| 2 | No | Hillary | No | No | No |
| 3 | No | Jeb | Yes | Yes | Yes |
| 4 | No | Jeb | No | No | No |
| 5 | Yes | Hillary | No | No | No |
| 6 | Yes | Jeb | No | No | No |
| 7 | Yes | Jeb | No | No | No |
Let us use this table to define some probabilities. If one intervenes to make Hillary win the election, the probability of the US being nuked is 2/7 (read off the "US Nuked if Hillary elected" column). If one intervenes to make Jeb win the election, the probability of the US being nuked is also 2/7 (the "US Nuked if Jeb elected" column). In the language of causal inference, these quantities are Pr[Nuked | do(Elect Clinton)] and Pr[Nuked | do(Elect Bush)]. The fact that they are equal confirms the Demon's claim that the choice of President has no effect on the outcome. An agent following causal decision theory will use this information to correctly conclude that he has no preference between electing Hillary and electing Jeb.
However, if one were to condition on who actually is elected, we get different numbers: Conditional on being in a state where Hillary is elected, the probability of the US being nuked is 1/3, whereas conditional on being in a state where Jeb is elected, the probability of being nuked is 1/4. Mathematically, these probabilities are Pr[Nuked | Clinton elected] and Pr[Nuked | Bush elected]. An agent following evidential decision theory will use this information to conclude that he should vote for Bush. Because evidential decision theory is wrong, he will fail to optimize for the outcome he is interested in.
Now, let us ask which probabilities our prediction market will converge to, i.e., which probabilities participants in the market have an incentive to provide their best estimates of. We defined one contract as "Hillary is elected and the US is nuked". The probability of this outcome is 1/7; if we normalize by dividing by the marginal probability that Hillary is elected (3/7), we get 1/3, which is equal to Pr[Nuked | Clinton elected]. In other words, the prediction market estimates the wrong quantity.
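To check the arithmetic, here is a short Python sketch that encodes the Demon's table and computes the interventional probabilities, the conditional probabilities, and the price the market converges to (the state encoding and variable names are mine):

```python
from fractions import Fraction

# The Demon's seven equiprobable states, transcribed from the table:
# (kim_overthrown, winner_if_no_intervention, nuked_if_hillary, nuked_if_jeb)
STATES = [
    (False, "Hillary", True,  True),   # state 1
    (False, "Hillary", False, False),  # state 2
    (False, "Jeb",     True,  True),   # state 3
    (False, "Jeb",     False, False),  # state 4
    (True,  "Hillary", False, False),  # state 5
    (True,  "Jeb",     False, False),  # state 6
    (True,  "Jeb",     False, False),  # state 7
]

def pr(event, states=STATES):
    """Probability of `event` under a uniform distribution over `states`."""
    return Fraction(sum(1 for s in states if event(s)), len(states))

# Interventional probabilities: forcing the winner changes nothing else.
pr_do_hillary = pr(lambda s: s[2])   # Pr[Nuked | do(Elect Clinton)] = 2/7
pr_do_jeb     = pr(lambda s: s[3])   # Pr[Nuked | do(Elect Bush)]    = 2/7

# Conditional probabilities: condition on who wins absent intervention.
hillary_states = [s for s in STATES if s[1] == "Hillary"]
jeb_states     = [s for s in STATES if s[1] == "Jeb"]
pr_given_hillary = pr(lambda s: s[2], hillary_states)  # Pr[Nuked | Clinton elected] = 1/3
pr_given_jeb     = pr(lambda s: s[3], jeb_states)      # Pr[Nuked | Bush elected]    = 1/4

# The "Hillary elected and US nuked" contract price, normalized by
# Pr[Hillary elected], recovers the conditional quantity, not the causal one.
joint    = pr(lambda s: s[1] == "Hillary" and s[2])  # 1/7
marginal = pr(lambda s: s[1] == "Hillary")           # 3/7
assert joint / marginal == pr_given_hillary          # 1/3, not 2/7

print(pr_do_hillary, pr_do_jeb, pr_given_hillary, pr_given_jeb)
```

Running this prints 2/7, 2/7, 1/3 and 1/4, matching the numbers above.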
Essentially, what happens here is structurally the same phenomenon as confounding in epidemiologic studies: There is a common cause of Hillary being elected and the US being nuked. This common cause - whether Kim Jong-Un is still Great Leader of North Korea - leads to a correlation between the election of Hillary and the outcome, but that correlation is purely non-causal and not relevant to a rational decision maker.
The obvious next question is whether there exists a way to save futarchy, i.e., any way to give traders an incentive to pay a price that reflects their beliefs about Pr[Nuked | do(Elect Clinton)] instead of Pr[Nuked | Clinton elected]. We discussed this question at the Less Wrong meetup in Boston a couple of months ago. The only approach we agreed would definitely solve the problem is the following procedure:
- The governing body makes an absolute pre-commitment that no matter what happens, the next President will be determined solely on the basis of the prediction market
- The following contracts are listed: “The US is nuked if Hillary is elected” and “The US is nuked if Jeb is elected”
- At the pre-specified date, the markets are closed and the President is chosen based on the estimated probabilities
- If Hillary is chosen, the contract on Jeb cannot be settled, and all bets are reversed.
- The Hillary contract expires when it is known whether Kim Jong-Un presses the button.
This procedure will get the correct results in theory, but it has the following practical problems: It allows maximizing on only one outcome metric (because one cannot precommit to choose the President based on criteria that could potentially be inconsistent with each other). Moreover, it requires the reversal of trades, which will be problematic if people who won money on the Jeb contract have withdrawn their winnings from the exchange.
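To see why this works in theory, here is a minimal sketch, under the Demon's seven-state model, of the expected settlement value of the Hillary contract when losing-branch trades are reversed (the encoding is mine; the key point, from the pre-commitment above, is that the market's choice is an intervention and therefore carries no information about the latent state):

```python
from fractions import Fraction

# "US Nuked if Hillary elected" in each of the seven equiprobable states,
# read off the table above.
NUKED_IF_HILLARY = [True, False, True, False, False, False, False]

# A holder of "The US is nuked if Hillary is elected" is paid $100 if the
# Hillary branch is realized and the US is nuked, $0 if it is realized and
# the US is not nuked, and is refunded (trade reversed) otherwise. Because
# the choice of branch is an intervention, conditioning on the branch being
# realized does not change the distribution over states:
expected_settlement = 100 * Fraction(sum(NUKED_IF_HILLARY), len(NUKED_IF_HILLARY))
print(expected_settlement)  # 200/7 ~ $28.57 = $100 * Pr[Nuked | do(Elect Clinton)]
```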
The only other option I can think of for obtaining causal information from a prediction market is to "control for confounding". If, for instance, the only confounder is whether Kim Jong-Un is overthrown, we can control for it by using do-calculus to show that Pr[Nuked | do(Elect Clinton)] = Pr[Nuked | Clinton elected, Kim overthrown] * Pr[Kim overthrown] + Pr[Nuked | Clinton elected, Kim not overthrown] * Pr[Kim not overthrown]. All of these quantities can be estimated from separate prediction markets.
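As a sanity check, here is a sketch verifying the adjustment formula against the Demon's table (the state encoding is mine, as above):

```python
from fractions import Fraction

# (kim_overthrown, winner_if_no_intervention, nuked_if_hillary) per state
STATES = [
    (False, "Hillary", True),  (False, "Hillary", False),
    (False, "Jeb",     True),  (False, "Jeb",     False),
    (True,  "Hillary", False), (True,  "Jeb",     False),
    (True,  "Jeb",     False),
]

# Backdoor adjustment: sum over the confounder (Kim overthrown or not) of
# Pr[Nuked | Clinton elected, confounder] * Pr[confounder].
adjusted = Fraction(0)
for k in (True, False):
    cond = [s for s in STATES if s[1] == "Hillary" and s[0] == k]
    pr_nuked_given = Fraction(sum(1 for s in cond if s[2]), len(cond))
    pr_k = Fraction(sum(1 for s in STATES if s[0] == k), len(STATES))
    adjusted += pr_nuked_given * pr_k

print(adjusted)  # 2/7, equal to Pr[Nuked | do(Elect Clinton)]
```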
However, this is problematic for several reasons:
- There will be an exponential explosion in the number of required prediction markets, and each of them will ask participants to bet on complicated conditional probabilities that have no obvious causal interpretation.
- There may be disagreement on what the confounders are, which will lead to contested contract interpretations.
- The expert consensus on what the important confounders are may change during the lifetime of the contract, which would require the entire market to be relisted.

For practical reasons, therefore, this approach does not seem feasible.
I’d like a discussion on the following questions: Are there any other ways to list a contract that gives market participants an incentive to aggregate information on causal quantities? If not, is futarchy doomed?
(Thanks to the Less Wrong meetup in Boston and particularly Jimrandomh for clarifying my thinking on this issue)
Futarchy can't distinguish between 'values' and 'beliefs'.

It takes domain knowledge and discovery research to realise which values can actually be reduced to beliefs.

For instance, someone might value 'healthcare', thinking that the associated beliefs are about 'activity-costing' of health budgets on the departmental secretary's recommendations vs. throwing it all into bednets (an absurd but illustrative example).

In actual fact, the underlying value may not be healthcare, depending on whether the person believes healthcare maximises some confounded higher-order value - i.e. health.

However, it's also strategic in an international context: depending on what someone believes, they may or may not be trying to maximise for strategy!
First of all, I think it would be a good idea to avoid use of the word "confounding" unless you use it with its technical definition, i.e., to discuss whether Pr(X | Y) = Pr(X | do(Y)), or informally to describe the smoking lesion problem or Simpson's paradox. I don't think that is what you are referring to in this case.
I think what you're getting at is an example of Goodhart's la...