Clarity comments on Prediction Markets are Confounded - Implications for the feasibility of Futarchy - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (40)
Futarchy can't distinguish between 'values' and 'beliefs'.
It takes domain knowledge and discovery research to realise which values can actually be reduced to beliefs.
For instance, someone might value 'healthcare', thinking that the associated beliefs concern 'activity-costing' of health budgets on the departmental secretary's recommendations vs. throwing it all into bednets (an absurd but illustrative example).
In actual fact, the underlying value may not be healthcare at all, depending on whether the person believes healthcare maximises some confounded higher-order value, i.e. health.
Moreover, healthcare is also strategic in an international context: depending on what someone believes, they may or may not be trying to maximise for strategy!
I'm learning more here
First of all, I think it would be a good idea to avoid use of the word "confounding" unless you use it with its technical definition, i.e., to discuss whether Pr(X|Y) = Pr(X|do(Y)); or informally to describe the smoking lesion problem or Simpson's paradox. I don't think that is what you are referring to in this case.
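To make that technical definition concrete, here is a minimal simulation of the smoking lesion setup, with entirely made-up numbers: a hidden lesion Z raises both the chance of smoking (Y) and of cancer (X), while Y has no causal effect on X. Observational Pr(X|Y) and interventional Pr(X|do(Y)) then come apart:

```python
import random

random.seed(0)

def trial(intervene=None):
    # Hypothetical parameters: lesion Z is present half the time;
    # Z raises both smoking (Y) and cancer (X); Y never causes X.
    z = random.random() < 0.5
    if intervene is None:
        y = random.random() < (0.8 if z else 0.2)  # Y happens naturally
    else:
        y = intervene                               # do(Y): force Y, leave Z alone
    x = random.random() < (0.8 if z else 0.1)       # X depends only on Z
    return x, y

N = 100_000
obs = [trial() for _ in range(N)]
# Observational: Pr(X | Y) -- condition on Y having happened naturally,
# which also tells you something about the hidden Z.
pr_x_given_y = sum(x for x, y in obs if y) / sum(y for _, y in obs)

# Interventional: Pr(X | do(Y)) -- force Y, which tells you nothing about Z.
ints = [trial(intervene=True) for _ in range(N)]
pr_x_do_y = sum(x for x, _ in ints) / N

print(round(pr_x_given_y, 2), round(pr_x_do_y, 2))
```

With these parameters Pr(X|Y=1) works out to about 0.66 while Pr(X|do(Y=1)) is about 0.45; the gap is exactly what "confounded" means in the technical sense.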
I think what you're getting at is an example of Goodhart's law. See for instance http://lesswrong.com/lw/1ws/the_importance_of_goodharts_law/
Certainly, if you use prediction markets with contracts on G* instead of G, people will bet based on their true beliefs about G* instead of their true beliefs about G. In this case, futarchy will end up optimizing for G* instead of G (assuming you can find a solution to the confounding problem). I don't disagree with this criticism of futarchy, but I'm not sure I see the relevance to my post.
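The G vs. G* point can be sketched in a toy model (all names and numbers here are hypothetical, not from the post): the true goal G rewards only real effort, while the measurable proxy G* also rewards "gaming". Optimizing hard on G* then drives G to its worst value:

```python
# Toy Goodhart's law sketch: G is the true goal, G* a correlated proxy.
def G(effort, gaming):
    return effort            # the true goal only counts real effort

def G_star(effort, gaming):
    return effort + 2 * gaming  # the proxy also rewards gaming

# Each candidate policy splits a fixed budget between effort and gaming.
candidates = [(i / 100, 1.0 - i / 100) for i in range(101)]

best_for_true = max(candidates, key=lambda c: G(*c))
best_for_proxy = max(candidates, key=lambda c: G_star(*c))

print(best_for_true)   # all budget on real effort
print(best_for_proxy)  # all budget on gaming
```

Under optimization pressure the proxy-optimal policy puts the whole budget into gaming, scoring zero on G, which is the sense in which a futarchy betting on G* "ends up optimizing for G* instead of G".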