gjm comments on The $125,000 Summer Singularity Challenge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (259)
Even if this is so, there is tons of evidence that humans suck at reasoning about such large numbers. If you want to make an extraordinary claim like the one you made above, then you need to put forth a large amount of evidence to support it. And on such a far-mode topic, the likelihood of your argument being correct decreases exponentially with the number of steps in the inferential chain.
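To make the inferential-chain point concrete, here is a toy calculation (the 90% per-step figure is an assumption for illustration, not anything from the thread): even when each step is fairly reliable, confidence in the conclusion decays exponentially with chain length if the steps are roughly independent.

```python
# Toy illustration with an assumed per-step reliability: confidence in
# the conclusion of an inferential chain decays exponentially with the
# number of (roughly independent) steps.
per_step_confidence = 0.9  # hypothetical reliability of each inference step

for n_steps in (1, 5, 10, 20):
    confidence = per_step_confidence ** n_steps
    print(f"{n_steps} steps -> {confidence:.3f}")
```

Even at 90% reliability per step, a 20-step argument retains only about 12% confidence under this (admittedly simplistic) independence assumption.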
I only skimmed through the video, but assuming that the estimates at 11:36 are what you're referring to, those numbers are both seemingly quite high and entirely unjustified in the presentation. It also overlooks things like the fact that utility doesn't scale linearly in number of lives saved when calculating the benefit per dollar.
Whether or not those numbers are correct, presenting them in their current form seems unlikely to be very productive. Likely either the person you are talking to already agrees, or the 8 lives figure triggers an absurdity heuristic that will demand large amounts of evidence. Heck, I'm already pretty familiar with the arguments, and I still get a small amount of negative affect whenever someone tries to make the "donating to X-risk has &lt;insert very large number&gt; expected utility" argument.
I don't think anyone on LW disagrees that reducing xrisk substantially carries an extremely high utility. The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, this seems like the more productive path of discussion.
Keep in mind that estimation is the best we have. You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.
From a Bayesian point of view, your prior should place low probability on a figure like "8 lives per dollar". Therefore, lots of evidence is required to overcome that prior.
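The prior/evidence point can be made concrete with a toy Bayes-factor calculation. All numbers here are hypothetical illustrations, not anyone's actual estimates: if a claim like "8 lives per dollar" starts at, say, a one-in-a-million prior, the evidence must supply a very large likelihood ratio to bring it even to even odds.

```python
# Toy Bayesian update with made-up numbers: how strong the evidence must
# be to raise a claim from a very low prior to even odds.
import math

prior = 1e-6            # hypothetical prior probability of the claim
posterior_target = 0.5  # even odds

prior_odds = prior / (1 - prior)
target_odds = posterior_target / (1 - posterior_target)

# Required likelihood ratio (Bayes factor) of the total evidence
required_bayes_factor = target_odds / prior_odds
required_bits = math.log2(required_bayes_factor)

print(f"required Bayes factor: {required_bayes_factor:.3g}")
print(f"equivalent bits of evidence: {required_bits:.1f}")
```

Under these assumed numbers, roughly a million-to-one likelihood ratio (about 20 bits of evidence) is needed, which is why a sketchy verbal argument cannot carry the conclusion on its own.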
From a decision-theoretic point of view, the general strategy of believing sketchy (with no offense intended to Anna; I look forward to reading the paper when it is written) arguments that reach extreme conclusions at the end is a bad strategy. There would have to be a reason why this argument was somehow different from all other arguments of this form.
If there were tons of actions lying around with similarly huge potential positive consequences, then I would be first in line to take them (for exactly the reason you gave). As it stands, it seems like in reality I get a one-time chance to reduce p(bad singularity) by some small amount. More explicitly, it seems like SIAI's research program reduces xrisk by some small amount, and a handful of other programs would also reduce xrisk by some small amount. There is no combined set of programs that cumulatively reduces xrisk by some large amount (say > 3% to be explicit).
I have to admit that I'm a little bit confused about how to reason here. The issue is that any action I can personally take will only decrease xrisk by some small amount anyway. But to me the situation feels different if society can collectively decrease xrisk by some large amount, versus if even collectively we can only decrease it by some small amount. My current estimate is that we are in the latter case: even if xrisk research had unlimited funding, we could only decrease total xrisk by something like 1%. My intuitions here are further complicated by the fact that I also think humans are very bad at estimating small probabilities, so the 1% figure could very easily be a gross overestimate, whereas a 5% figure is starting to get into the range where humans are a bit better at estimating, and is less likely to be such a bad overestimate.
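The individual-vs-collective framing can be sketched with toy arithmetic. Every number below is an assumption for illustration (including the linear-split simplification), not an estimate anyone in the thread has made:

```python
# Toy numbers only: if the total achievable x-risk reduction is small
# and requires large total funding, a single donation's marginal
# reduction is tiny (assuming, simplistically, a linear split).
total_reduction = 0.01  # assumed maximum collective reduction (1%)
total_funding = 1e9     # assumed dollars needed to achieve it
my_donation = 1000      # hypothetical individual contribution

my_share = total_reduction * (my_donation / total_funding)
print(f"my marginal risk reduction: {my_share:.2e}")
```

The sketch is only meant to show why the "small collectively achievable reduction" case feels different: the individual contribution is small in either case, but in this case even the ceiling is low.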
My prior contains no such provisions; there are many possible worlds where tiny applications of resources have apparently disproportionate effect, and from the outside they don't look so unlikely to me.
There are good reasons to be suspicious of claims of unusual effectiveness, but I recommend making that reasoning explicit and seeing what it says about this situation and how strongly.
There are also good reasons to be suspicious of arguments involving tiny probabilities, but keep in mind: first, you probably aren't 97% confident that we have so little control over the future (I've thought about it a lot and am much more optimistic), and second, that even in a pessimistic scenario it is clearly worth thinking seriously about how to handle this sort of uncertainty, because there is quite a lot to gain.
Of course this isn't an argument that you should support the SIAI in particular (though it may be worth gathering some information to understand what they are currently doing), only that you should continue to optimize in good faith.
Can you clarify what you mean by this?
Only that you consider the arguments you have advanced in good faith, as a difficulty and a piece of evidence rather than potential excuses.