I suggest the alternative strategy of not having political beliefs at all in the name of combating privileging the question. Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
Once you're in a position to actually influence policy, then maybe it makes sense to have opinions about policy.
How does anyone manage to acquire a position to actually influence policy? From what I can tell, people of my acquaintance who have done this have begun with some opinions about policy ... and, indeed, have sought positions that accord with their preëxisting policy opinions.
If you only have political opinions for the status benefits, then why would you need to calibrate them?
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject.
This is dangerous, because people tend to use additional information to support their existing opinion rather than to improve it. See Motivated Skepticism in the Evaluation of Political Beliefs.
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject.
Academic papers are what gets published, not what's true. The difference is particularly pronounced for political topics.
you should be able to accurately restate your opponent’s position
There are limits. You can't accurately restate gibberish. You can mimic it as in a Turing test, but I don't see any criterion for accuracy.
I think the best you can do is identify the unstated assumptions. When you can get both sides to say ...
You can't accurately restate gibberish.
Good point. If someone appears to be emitting gibberish on a subject but seems to be a reasonably functional member of society (i.e., is probably not floridly psychotic), and there's nothing about the structure of the subject that seems to license gibberish (e.g., a subject where dreams or psychedelic visions are treated as unquoted evidence), this may indicate that you simply don't understand the subject and should learn more before attempting to judge their statements.
For instance, I would expect that a person who had no higher math would consider correct statements in category theory to be indistinguishable from gibberish.
I don't think that being able to state the strongest counterargument to your position is a good proxy for the ideological Turing test (or vice versa). Most people with strong political views tend not to have very strong arguments for those views, precisely because they don't carefully consider counterarguments from the opposing side. So if I were the judge in an ideological Turing test, and the test subject made a sophisticated and nuanced argument for his position, one that is constructed as a strong refutation of the opposing side rather than just as a s...
It's good that you limit this to beliefs of the form "policy X will result in outcome Y". That may be answerable. Much political discussion and dispute is more about relative preferences and goals than matters of fact. For example, gay marriage. Few people honestly dispute any matter of fact in the gay marriage debate, nor would many people's minds be changed by learning they were wrong on a matter of fact. It's an argument about values and preferences.
A friend played the ideological Turing test on the web in some newsgroup, eventually becoming an ideological leader for his pretend side. Probably a good exercise.
The idea of an "ideological Turing test" arose from a disagreement between Bryan Caplan and Paul Krugman (two economists with different views). Caplan and Krugman never (so far as I know) engaged in any such test, but some other economists with different views apparently have done. See, e.g., the Wikipedia article.
One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
Easy hack:
I don't think many people do this as such, but there are less self-aware versions of the same procedure that do happen in practice. For example, if you hang out on any r...
I expect it would be hard to obtain good data about the actual results of implemented politics. (Not policies, which is a much more general term; just those policies adopted through a highly politicized process, like national laws or budget changes.)
This is for two reasons. First, major policy changes mostly happen when power changes hands, and a lot of changes happen together; their effects are hard to disentangle from one another.
Second, most policies are attempts to influence human behavior, and people's reaction to political policies is itself...
there's no easy way to assess whether a belief is true or false, in contrast to the trivia questions in the credence game. Moreover, it’s very easy to become mindkilled by politics. What do you do?
Tentatively preserve what is working, based on regular testing to confirm it is still working. Cautiously adopt changes with an emphasis on how to minimize the ways they can go wrong, not the possible benefits if they go right. Test newly adopted policies regularly. This was how 'conservatism' might have described itself at one time, but that word has other meanings now.
Mostly, decisions about large-scale interventions could be settled in economic terms. However, MIRI's recent strategic change toward math questions shows the difficulty of a heuristic approach: the amount of resources needed to find even very noisy information about the factors in a given scenario seems very costly.
So you're playing the credence game, and you’re getting a pretty good sense of which level of confidence to assign to your beliefs. Later, when you’re discussing politics, you wonder how you can calibrate your political beliefs as well (beliefs of the form "policy X will result in outcome Y"). Here there's no easy way to assess whether a belief is true or false, in contrast to the trivia questions in the credence game. Moreover, it’s very easy to become mindkilled by politics. What do you do?
In the credence game, you get direct feedback that allows you to learn about your internal proxies for credence, i.e., emotional and heuristic cues about how much to trust yourself. With political beliefs, however, there is no such feedback. One workaround would be to assign high confidence only to beliefs for which you have read n academic papers on the subject. For example, only assign 90% confidence if you've read ten academic papers.
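As a concrete illustration, here is a minimal sketch of this workaround in Python. Only the ten-papers/90% point comes from the text above; the other thresholds (and the function name) are illustrative assumptions, not part of the original proposal.

```python
def max_confidence(papers_read: int) -> float:
    """Cap on the credence you allow yourself for a political belief,
    as a function of how many academic papers you've read on it.

    Only the (10 papers -> 90%) point is from the post; the other
    steps are assumed for illustration.
    """
    if papers_read >= 10:
        return 0.90
    if papers_read >= 3:
        return 0.70
    if papers_read >= 1:
        return 0.60
    return 0.50  # no reading: stay near maximum uncertainty

# e.g. after reading four papers, report at most 70% confidence:
print(max_confidence(4))  # 0.7
```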
To account for mindkilling, use a second criterion: assign high confidence only to beliefs for which you are ideologically Turing-capable (i.e., able to pass an ideological Turing test). As a proxy for an actual ideological Turing test, you should be able to accurately restate your opponent’s position, or be able to state the strongest counterargument to your position.
In sum, to calibrate your political beliefs, only assign high confidence to beliefs which satisfy extremely demanding epistemic standards.
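Putting both criteria together, a minimal sketch of the summary rule might look like the following. The function name and the boolean proxies are hypothetical labels; the structure (a reading threshold, plus ideological-Turing capability proxied by restating the opponent's position or stating the strongest counterargument) follows the post.

```python
def can_assign_high_confidence(papers_read: int,
                               can_restate_opponents_position: bool,
                               can_state_strongest_counterargument: bool) -> bool:
    """Gate on both demanding standards from the post: enough reading,
    plus ideological-Turing capability (proxied, per the post, by being
    able to restate the opponent's position or to state the strongest
    counterargument). The ten-paper threshold is the post's example."""
    well_read = papers_read >= 10
    itt_capable = (can_restate_opponents_position
                   or can_state_strongest_counterargument)
    return well_read and itt_capable

# Well-read but mindkilled: high confidence is still not allowed.
print(can_assign_high_confidence(12, False, False))  # False
```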