
Comment author: paulfchristiano 31 January 2017 06:30:45AM 4 points [-]

I visit /discussion every 1-2 days on my laptop, typically spend a few seconds and then move on to something else. Sometimes I click through links or read posts if they look interesting, probably averaging ~1/day. I'm more likely to read posts with comment threads, and more likely to read posts than links. I rarely comment or vote.

Comment author: korin43 02 January 2017 01:00:14AM *  2 points [-]

I think my answer to all of this is: that sounds great but wouldn't it be better if it wasn't random?

If you have the skills and interest to do charity evaluation, why wait to win the lottery when you could join or start a charity evaluator? If you need money, running a fundraiser seems better than hoping to win the lottery.

If you think you're likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware of your results, rather than the more convoluted method of writing blog posts to convince people to join a lottery and then hoping to win.

And if you aren't very interested in charity research, why join a donor lottery that picks the decider at random when you could join one where it's always the most competent member (100% of the time, GiveWell gets to decide how to allocate the donation)?

Comment author: paulfchristiano 03 January 2017 01:32:06AM *  3 points [-]

I think my answer to all of this is: that sounds great but wouldn't it be better if it wasn't random?

Why would that be better?

If you think you're likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware of your results

I think you are radically, radically underestimating the difficulty of reaching consensus on challenging questions.

For example: a significant fraction of openphil staff make significant contributions to charities other than GiveWell recommendations, and in many cases they haven't reached consensus with each other; some give to farm animal welfare, some to science, some to political causes, etc.; even within causes there is significant disagreement. This is despite the fact that they spend most of their time thinking about philanthropy (though not about their personal giving).

why join a donor lottery that picks the decider at random when you could join one where it's always the most competent member

If you will certainly follow GiveWell recommendations after winning, then gambling makes no difference and isn't worth the effort (though hopefully it will eventually take nearly 0 effort, so it's really a wash). If you think that GiveWell is the most competent decider, yet somehow don't think that you will follow their recommendations, then I'm not sure what to say to you. If you are concerned about other people making bad decisions with their money, well that's not really your problem and it's orthogonal to whether they gamble with it.

Comment author: korin43 31 December 2016 02:24:57PM *  3 points [-]

I'm not OP, but I have similar feelings about GiveWell. They have 19 full-time employees (at least 8 of whom are researchers). I am one person with a full-time non-research non-charity job. Assume I spend 40 hours on this if I win (around a month of free time). Running the numbers, I expect GiveWell to be able to spend at least 400x more time on this, and I expect their work to be far more productive because they wouldn't be running themselves ragged with (effectively) two jobs, and the average GiveWell researcher already has more than a year of experience doing this and the connections that come with it.
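The "at least 400x" figure above follows from a quick back-of-the-envelope calculation (the hours-per-year figure is my assumption, not GiveWell's):

```python
# Rough comparison of research hours available.
# Assumption: 8 full-time researchers at ~2,000 work hours per year.
givewell_researchers = 8
hours_per_year = 2000
givewell_hours = givewell_researchers * hours_per_year  # 16,000 hours

my_hours = 40  # roughly a month of free time

ratio = givewell_hours / my_hours
print(ratio)  # 400.0 -- "at least 400x more time"
```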

Regarding the target audience, I feel like the kinds of people who would enjoy doing this should either apply for a job at GiveWell, or start a new charity evaluator. If you think you can do better than they can, why rely on a lottery victory to prove it?

Comment author: paulfchristiano 31 December 2016 05:25:59PM *  6 points [-]

I agree that GiveWell does high-quality research and identifies effective giving opportunities, and that donors can do reasonably well by deferring to their recommendations. I think it is not at all crazy to suspect that you can do better, and I do not personally give to GiveWell recommended charities. Note for example that Holden also does not donate exclusively to GiveWell charities, and indeed is generally supportive of using either lotteries or delegation to trusted individuals.

  1. GiveWell does not purport to solve the general problem of "where should EAs give money." They purport to evaluate one kind of intervention: "programs that have been studied rigorously and ideally repeatedly, and whose benefits we can reasonably expect to generalize to large populations, though there are limits to the generalizability of any study results. The set of programs fitting this description is relatively limited, and mostly found in the category of health interventions" (here).

  2. The situation isn't "you think for X hours, and the more hours you think the better the opportunities you can find, which you can then spend arbitrarily large amounts of money on." You need to do thinking in order to identify opportunities to do good, each of which can accept a certain amount of money. In order to identify a better donation opportunity than GiveWell, one does not have to do more work than GiveWell / delegate to someone who has done more work.

  3. By thinking longer, you could identify a different delegation strategy, rather than finding an object-level recommendation. You aren't improving on GiveWell's research, just on your current view that GiveWell is the right person to defer to. There are many people who have spent much longer than you thinking about where to give, and at a minimum you are picking one of them. Having large piles of money and being thoughtful about where to give it is the kind of situation that (for example) made it possible for GiveWell to get started, and it seems somewhat perverse to celebrate GiveWell while placing no value on the conditions that allowed it to come into existence.

  4. In a normal world, the easiest recommendations to notice/verify/follow would receive the most attention, and so all else equal you might get higher returns by looking for recommendations that are harder.

  5. If you think GiveWell recommended charities are the best intervention, then you should be pretty much risk neutral over the scale of $100k or even $1M. So the cost is relatively low (perhaps mostly my 0.5% haircut) and you would have to be pretty confident in your view (despite the fact that many thoughtful people disagree) in order to make it worthless.

  6. The point of lotteries is not to have fun or prove that we are clever, it is to use money well.
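The risk-neutrality claim in point 5 can be illustrated with a toy calculation (the pot size, stake, and fee here are illustrative assumptions): if value is linear in dollars donated, the lottery leaves each donor's expected directed donation unchanged, minus any fee.

```python
# Toy illustration: a donor lottery is expectation-preserving
# when value is linear in dollars donated.
n_donors = 100
stake = 1_000                    # each donor contributes $1,000
pot = n_donors * stake           # $100,000 pot

# Each donor wins with probability stake/pot and then directs the pot.
p_win = stake / pot
expected_directed = p_win * pot  # equals the stake: expectation unchanged

haircut = 0.005                  # e.g. a 0.5% fee on the pot
expected_after_fee = p_win * pot * (1 - haircut)

print(expected_directed)   # 1000.0
print(expected_after_fee)  # 995.0
```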

Comment author: paulfchristiano 30 December 2016 09:59:16PM 6 points [-]

as recently implemented by Paul Christiano in cooperation with Carl Shulman

(implemented by Carl Shulman in cooperation with Paul Christiano)

In general, I think that "random dictator" may often be a better governance system than a committee or a democracy (except where there are diminishing returns and limited ability to negotiate from behind a veil of ignorance).

I think that "thinking more" is by far the most important source of returns to scale in the $1,000 - $100,000 range.

Comment author: NatashaRostova 23 December 2016 09:33:28PM 0 points [-]

How is a prediction market subsidized by someone with an interest in the information? As far as I'm aware, most of them make money on bid/ask spreads, and can be thought of as a future or Arrow–Debreu security.

As the current institutions stand, there are differences. Prediction market sites and the Nasdaq are obviously different in a lot of institutional ways. In prediction markets you can't own companies. But in the more abstract sense in which people trade on current information as a prediction, which is eventually realized, they are similar.

For example, a corporate bond is going to make a series of payments to the holder over its maturity. Market makers can strip off these payments and sell them as bespoke securities, so you could buy the rights to a single payment on debt from company X in 12 months. If you'd like, people can then write binary options on those such that they receive everything or nothing based on a specified strike price.

In the general security there is lots of information and dynamics, but with the right derivatives structure you can break it up into a state of a series of binary predictions.
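The stripping-plus-binary-options construction above can be sketched as payoff functions (the names and numbers are hypothetical illustrations, not anything from the thread):

```python
# Sketch: a stripped coupon is a claim on one future payment; a
# cash-or-nothing binary option on it pays a fixed amount iff the
# underlying finishes above the strike, turning the payment into a
# yes/no prediction.
def stripped_coupon_value(payment: float, discount_factor: float) -> float:
    """Present value of a single stripped payment."""
    return payment * discount_factor

def binary_option_payoff(underlying_price: float, strike: float,
                         payout: float = 1.0) -> float:
    """Cash-or-nothing call: everything or nothing at the strike."""
    return payout if underlying_price > strike else 0.0

# E.g. a $50 coupon due in 12 months, discounted at 4%:
pv = stripped_coupon_value(50.0, 1 / 1.04)
# A binary on whether that claim ends above $45:
print(binary_option_payoff(pv, 45.0))  # 1.0 (pv is about 48.08 > 45)
```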

The dynamic structure behind prediction markets and financial markets as trading current values built on models of future expectations is very similar, and I think identical.

Comment author: paulfchristiano 29 December 2016 01:16:23AM 1 point [-]

I agree with you that there is no difference in kind between the assets traded in existing financial markets and those traded in a prediction market.

Existing prediction markets primarily offer amusements to participants, and are run like other gambling sites, with a profit to the market and the average trader losing money. Existing markets may hedge some participants' risk, and in that respect are like a financial market.

Around here, prediction markets are usually proposed as an institution for making predictions (following Robin). In that context, someone who wants a prediction subsidizes the market, perhaps by subsidizing a market maker. The traders aren't trading because it's fun or they have a hedging interest, they are doing it because they are getting paid by someone who values the cognitive work they are doing.
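One standard mechanism for "subsidizing a market maker" is Hanson's logarithmic market scoring rule (LMSR), where the sponsor's worst-case loss is bounded by the liquidity parameter; that bounded loss is exactly the payment for traders' cognitive work. A minimal sketch for a binary question (my illustration, not something specified in the comment):

```python
import math

# Hanson's LMSR market maker for a binary question. The subsidizer's
# worst-case loss is b * ln(2); that subsidy flows to informed traders.
class LMSRMarketMaker:
    def __init__(self, b: float):
        self.b = b              # liquidity parameter (sets the subsidy)
        self.q = [0.0, 0.0]     # outstanding shares for outcomes [no, yes]

    def cost(self, q) -> float:
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome: int) -> float:
        """Current probability estimate for an outcome."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares of an outcome; returns the cost charged."""
        new_q = list(self.q)
        new_q[outcome] += shares
        charge = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return charge

mm = LMSRMarketMaker(b=100.0)
print(mm.price(1))            # 0.5 -- uninformative prior
mm.buy(1, 50.0)               # an informed trader buys "yes"
print(round(mm.price(1), 3))  # 0.622 -- price moves toward the trader's view
```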

In some cases this is unnecessary, because someone has a natural interest in influencing the prediction (e.g. if the prediction will determine whether to fire a CEO, then the CEO has a natural interest in ensuring that the prediction is favorable). In this case the decision-maker pays for the cognitive work of the traders by making a slightly suboptimal decision. Manipulative traders pay for the right to influence the decision, and informed traders are compensated by being counterparties to a manipulative trader.

I think this is the important distinction between a prediction market and other kinds of markets---in the case of prediction markets, traders make money because someone is willing to pay for the information generated by the market. I agree that this is not the case for existing prediction markets, and so it's not clear if my story is reasonable. But it is clear that there is a difference in kind between the intended use of prediction markets and other financial markets.

Comment author: NatashaRostova 21 December 2016 01:20:27AM 0 points [-]

Literally the only difference in terms of prediction dynamics is that currently prediction markets include political/non-financial questions, which are only implicitly included in financial markets.

Comment author: paulfchristiano 22 December 2016 02:55:26AM 2 points [-]

I think of the difference as: a prediction market is subsidized by someone with an interest in the information (or participants who want to influence some decision-maker who will act on the basis of market prices), while a financial market facilitates trade (or as a special case hedges risk).

Comment author: ESRogs 18 December 2016 09:41:24AM 0 points [-]

Oh, did you mean that the community has to interact with a post/comment (by e.g. upvoting it) enough for the ML system to have some data to base its judgments on?

I had been imagining that the system could form an opinion w/o the benefit of any reader responses, just from some analysis of the content (character count, words used, or even NLP), as well as who wrote it and in what context.
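A minimal sketch of the kind of content-only features such a system might start from (the feature choices are my illustration, not a proposal from the thread):

```python
# Toy content-based features for scoring a comment without reader votes:
# surface statistics of the text plus context about the author.
def content_features(text: str, author_karma: int) -> dict:
    words = text.split()
    return {
        "char_count": len(text),
        "word_count": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "has_link": "http" in text,
        "author_karma": author_karma,  # context: who wrote it
    }

feats = content_features("I think this argument has a gap: ...", author_karma=1200)
print(feats["word_count"])  # 8
```

As the reply below notes, surface features like these are easy to game, which is one reason behavioral signals may matter more in practice.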

Comment author: paulfchristiano 19 December 2016 06:05:22PM 2 points [-]

In the long run that's possible, but I don't think that existing ML is nearly good enough to do that (especially given that people can learn to game such features).

(In general when I talk about ML in the context of moderation or discussion fora or etc., I've been imagining that user behavior is the main useful signal.)

Comment author: Stuart_Armstrong 18 December 2016 11:07:26AM 1 point [-]

I'd like people not to threaten to hit me to get my stuff. I'd like people to trade with me. I'd like people who trade with me not to precommit to taking 90% of the gains from trade. I'd also like people who trade with me not to precommit to taking 10% of the gains from trade.

Hell, if someone is going to hit me anyway, I'd like the option of paying them a little to make them hit less hard.

It seems to me that counterfactual is just another word for default - ie the alternative that would have happened if they'd not decided to extort/trade.

Comment author: paulfchristiano 19 December 2016 05:41:28PM *  0 points [-]

It seems to me that counterfactual is just another word for default - ie the alternative that would have happened if they'd not decided to extort/trade.

Sure, we can use default as another name for a particular counterfactual. Note that many people around here are already asking "how should we compute logical counterfactuals?" Thinking about defaults suggests an emphasis on different considerations, like norms, which seem like the wrong place to start.

(Note also that "what would have happened if they'd not decided to extort/trade" isn't the right counterfactual, so if that's what "default" means then I don't think that defaults are the important questions. We care about counterfactuals over our behavior, or over other peoples' beliefs about our behavior.)

Comment author: paulfchristiano 17 December 2016 10:49:33PM *  6 points [-]

Although it is standard practice around here, "blackmail" is a weird name for this phenomenon. Why not "extortion"?

In the broader world, "blackmail" is a particular kind of extortion in which the threat is to reveal information (occasionally people use it more broadly, particularly in "emotional blackmail," which seems to just come from the same kind of sloppiness that led this community to call it "blackmail"). Calling bargaining "blackmail" sounds weird because bargaining has nothing to do with revealing information, but it is quite common to recognize a fine line between bargaining and extortion.

On topic:

Presumably the distinction between extortion and trade is in counterfactuals: in the extortion case the target is better off if they are known to be unwilling to respond to extortion, while in the trade case the target is better off if they are known to be willing to respond to trade. That seems like the most promising way to get at a formal distinction. Certainly this is tricky, and we don't yet have a good enough understanding of decision theory to make clean statements. I'm not sure that thinking about it in terms of a "default" is useful, though; we can take a step forward by replacing "default" with "what would happen if the trader/extorter didn't think that trade/extortion was possible." This makes it clear that norms are mostly relevant insofar as they bear on this counterfactual.
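The counterfactual test above can be put as a toy payoff comparison (the numbers are purely illustrative assumptions):

```python
# Toy payoffs for the target under each commitment, illustrating the
# proposed test: extortion and trade differ in which reputation helps.

# Extortion: the threat is carried out only if the target won't pay.
extortion = {
    "known_to_pay": -10,    # pays off the extorter
    "known_to_refuse": -2,  # extorter doesn't bother threatening
}
# Trade: the deal happens only if the target is willing to transact.
trade = {
    "known_to_trade": 5,    # gains from trade realized
    "known_to_refuse": 0,   # no deal, status quo
}

# The target prefers to be known as a refuser under extortion...
assert extortion["known_to_refuse"] > extortion["known_to_pay"]
# ...and as a willing trader under trade.
assert trade["known_to_trade"] > trade["known_to_refuse"]
```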

It makes sense that an extorter would like to convince the target of extortion that things would have been much worse for the target if they weren't expected to play ball, in exactly the same way that the extorter would like to convince the target that things will be much worse for the target if they don't actually play ball, or that a trader who wants to sell X would like to convince people that X is more valuable.

This kind of deception seems pretty straightforward compared to the more sticky issue of trying to make logically prior commitments, and the general fact that we don't understand very well how decision theory should work.

Comment author: Dagon 08 December 2016 03:43:41PM 0 points [-]

It also adds an attack vector, both for those willing to spend to influence the automation, and for those wanting to make a profit on their influence over the moderators.

I'd love to see a model displayed alongside the actual karma and results, and I'd like to be able to set my thresholds for each mechanism independently. But I don't think there's any solution that doesn't involve a lot more ground-truthing by trusted evaluators.

Note that we could move one level of abstraction out - use algorithms (possibly ML, possibly simple analytics) to identify trust level in moderators, which the actual owners (those who pick the moderators and algorithms) can use to spread the moderation load more widely.

Comment author: paulfchristiano 09 December 2016 02:20:19AM 1 point [-]

It also adds an attack vector, both for those willing to spend to influence the automation

I'm optimistic that we can cope with this in a very robust way (e.g. by ensuring that when there is disagreement, the disagreeing parties end up putting in enough money that the arbitrage can be used to fund moderation).

and for those wanting to make a profit on their influence over the moderators

This seems harder to convincingly address.

But I don't think there's any solution that doesn't involve a lot more ground-truthing by trusted evaluators.

So far I don't see any lower bounds on the amount of ground truth required. I expect that there aren't really theoretical limits---if the moderator was only willing to moderate in return for very large sums of money, then the cost per comment would be quite high, but they would potentially have to moderate very few times. I see two fundamental limits:

  • Moderation is required in order to reveal info about the moderator's behavior, which is needed by sophisticated bettors. This could also be provided in other ways.
  • Moderation is required in order to actually move money from the bad predictors to the good predictors. (This doesn't seem important for "small" forums, since then the incentive effects are always the main thing, i.e. the relevant movement of funds from bad predictors to good predictors is happening at the scale of the world at large, not at the scale of a particular small forum.)
