
Causal prediction markets.

Prediction markets (and prediction tournaments more generally) may be useful not only for telling us what will happen, but also which actions will achieve our goals. One proposal for getting prediction markets to help with this is to have users make conditional predictions. For example, we can ask "will GDP be higher if Biden wins the election than if Trump wins?" and use the answer as evidence about whom to elect, and so on. But conditional predictions only tell us the effect of an action if the event (e.g. who is elected) is unconfounded with the outcome (GDP). Higher GDP and Biden being elected may share a common cause, even if electing Biden does not increase GDP at all. One way to address this is to have the market pay out only if Biden barely wins, or Trump barely wins, so that the confounders can be assumed to be in a similar state either way. Another strategy for identifying the causal effect is to randomise. We can't randomise the election result, but we can randomise other quantities. For instance: "we generate a random number from 1 to 100 and audit company X only if it comes up 1; conditional on that, how much tax evasion will we find?". In general, designing action-guiding prediction markets may require drawing on identification strategies from the causal inference literature.
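To make the confounding worry concrete, here is a toy simulation (my own illustrative example; the logistic link and all numbers are arbitrary assumptions, not anything specified above). The winner has no causal effect on GDP, yet the naive conditional comparison suggests a large effect, while the coin-flip version does not:

```python
# Toy simulation (illustrative assumptions only): the winner has no causal effect on
# GDP, but a hidden "economy" variable drives both GDP and who wins, so the naive
# conditional comparison is confounded while a coin-flip (randomised) version is not.
import math
import random

random.seed(0)

def simulate(n=200_000):
    obs_biden, obs_trump = [], []
    rct_biden, rct_trump = [], []
    for _ in range(n):
        economy = random.gauss(0, 1)             # hidden common cause
        gdp = economy + random.gauss(0, 0.1)     # the winner does NOT enter here
        # Observational world: a strong economy makes a Biden win more likely.
        biden_wins = random.random() < 1 / (1 + math.exp(-2 * economy))
        (obs_biden if biden_wins else obs_trump).append(gdp)
        # Randomised world: the winner is decided by a fair coin flip.
        coin_biden = random.random() < 0.5
        (rct_biden if coin_biden else rct_trump).append(gdp)

    mean = lambda xs: sum(xs) / len(xs)
    print("confounded conditional estimate:", round(mean(obs_biden) - mean(obs_trump), 3))
    print("randomised estimate (true effect is 0):", round(mean(rct_biden) - mean(rct_trump), 3))

simulate()
```

A market that conditions on "Biden barely wins" or on an explicit coin flip is, in effect, estimating the second quantity rather than the first.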

I haven't yet checked for existing literature on this topic. Does anyone know of any?

I see the main problem with these schemes as one of liquidity and collateral. It's generally difficult to set up prediction markets that condition on unlikely events (like a market that is essentially a regression discontinuity), because assets that only pay out a small fraction of the time are unattractive to most buyers: unless they hold a huge, diversified portfolio of such claims, they must tie up a large amount of cash as collateral for small expected returns, and that cash can often earn more elsewhere, so people simply don't trade. I think Zvi discussed this in his post on prediction markets.
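As a rough back-of-the-envelope sketch (all numbers made up for illustration): suppose the conditioning event only occurs 2% of the time and the market is voided, with stakes refunded, otherwise.

```python
# Back-of-the-envelope sketch with made-up numbers: $1 of collateral locked for a year
# in a market that only resolves when a rare conditioning event occurs, vs. the
# risk-free return the same dollar could earn elsewhere.
p_condition = 0.02        # probability the conditioning event occurs (market voids otherwise)
edge_if_resolved = 0.05   # expected profit per $1 of collateral *if* the market resolves
risk_free = 0.04          # annual return available on the same cash elsewhere

expected_return = p_condition * edge_if_resolved
print(f"expected return per $1 of collateral: {expected_return:.4f}")  # 0.0010
print(f"risk-free alternative:                {risk_free:.4f}")        # 0.0400
```

With these illustrative numbers the trade simply isn't worth making unless the collateral itself keeps earning interest while it is locked up.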

You could potentially get around this if there were big institutional players who held large diversified portfolios of shares in many different markets, since then they don't actually have to tie up anything like the maximum amount of money they could lose in their bets as collateral. For that to happen you would either need people to already be trading on these markets (so there's someone to make money from) or you would need someone to be subsidizing these markets a lot. I suspect that kind of subsidy scheme would be impractical too, so I'm not sure how to go about implementing these conditional prediction market ideas generally.

I don't know if there's any existing literature on this either; it would be nice if someone who knew could comment, though it's unlikely given how old the top comment is.

All of these seem to be good points, although I haven't given up on liquidity subsidy schemes yet.

Let's say that the election was decided by a coin flip, so there is no common cause between GDP and who is elected. Even in that case, I am still not sure why we'd think the prediction market would be useful for evaluating a counterfactual, such as "will GDP be higher under Biden or Trump?" The whole premise of prediction markets is that we evaluate them based on facts, not on counterfactuals. We might be able to use a forecaster's ability to predict facts in order to decide how much to trust them when they make counterfactual claims. But I don't see how we can evaluate a counterfactual claim via the prediction market mechanism directly. Can you elaborate on that?

Feature suggestion: use highlighting for higher-resolution up/downvotes and (dis)agreevotes.

Sometimes you want to indicate which part of a comment you like or dislike, but can't be bothered writing a reply. In such cases, it would be nice if you could highlight the portion of text you like/dislike and have LW "remember" that highlighting and show it to other users. Concretely, when you click the like/dislike button, the website would record whatever text you had highlighted within that comment. Then, if anyone wants to see that highlighting, they could hover their mouse over the vote count and LW would render the highlighting in that comment.
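A minimal sketch of what the data model might look like (the class and field names are invented for illustration and don't reflect LessWrong's actual schema):

```python
# Hypothetical data model for highlight-scoped votes; names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HighlightVote:
    comment_id: str
    voter_id: str
    kind: str                            # "karma" or "agreement"
    value: int                           # e.g. +1 / -1 (larger for strong votes)
    start_offset: Optional[int] = None   # character range the voter had highlighted;
    end_offset: Optional[int] = None     # None if nothing was highlighted

def highlights_for_comment(votes, comment_id):
    """Return the highlighted ranges to render when a reader hovers over the vote count."""
    return [(v.start_offset, v.end_offset, v.value)
            for v in votes
            if v.comment_id == comment_id and v.start_offset is not None]
```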

The benefit would be that readers can conveniently give more nuanced feedback, and writers can better understand how readers feel about their content. It would also cut down on the nagging "why was this downvoted?" question, and hopefully reduce the extent to which people talk past each other when arguing.

Transformer models (like GPT-3) are generators of human-like text, so they can be modeled as quantilizers. However, any quantilizer guarantees are very weak, because they quantilize with very low q, equal to the likelihood that a human would generate that prompt.
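For reference, the cost bound for a q-quantilizer (as I recall it from Jessica Taylor's "Quantilizers" paper) is

$$\mathbb{E}_{x \sim Q_q}[c(x)] \le \frac{1}{q}\,\mathbb{E}_{x \sim \gamma}[c(x)],$$

where $\gamma$ is the base (human) distribution, $Q_q$ is the q-quantilizer over it, and $c$ is any cost function. If the effective $q$ is the probability that a human would produce the text in question, then $1/q$ is astronomically large and the bound says essentially nothing.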