MugaSofer comments on The $125,000 Summer Singularity Challenge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Even if this is so, there is tons of evidence that humans suck at reasoning about such large numbers. If you want to make an extraordinary claim like the one you made above, then you need to put forth a large amount of evidence to support it. And on such a far-mode topic, the likelihood of your argument being correct decreases exponentially with the number of steps in the inferential chain.
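The exponential-decay claim is easy to make concrete. As a purely illustrative sketch (the 90% per-step reliability is an assumed number, not anything from the argument above): if each step in an inferential chain is independently 90% likely to be sound, the reliability of the whole chain falls off geometrically with its length.

```python
# Illustrative only: assumes each inferential step is independently
# 90% likely to be sound; the chain is correct only if every step is.
PER_STEP = 0.9

def chain_reliability(steps: int, per_step: float = PER_STEP) -> float:
    """Probability that all `steps` independent steps are sound."""
    return per_step ** steps

for n in (1, 5, 10, 20):
    print(n, chain_reliability(n))  # ≈ 0.9, 0.59, 0.35, 0.12
```

Even under generous per-step odds, a twenty-step far-mode argument retains only about a 12% chance of being right end to end, which is the point being made about long inferential chains.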
I only skimmed through the video, but assuming that the estimates at 11:36 are what you're referring to, those numbers both seem quite high and go entirely unjustified in the presentation. It also overlooks things like the fact that utility doesn't scale linearly with the number of lives saved when calculating the benefit per dollar.
Whether or not those numbers are correct, presenting them in their current form seems unlikely to be very productive. Likely either the person you are talking to already agrees, or the 8 lives figure triggers an absurdity heuristic that will demand large amounts of evidence. Heck, I'm already pretty familiar with the arguments, and I still get a small amount of negative affect whenever someone tries to make the "donating to X-risk has <insert very large number> expected utility" argument.
I don't think anyone on LW disagrees that substantially reducing xrisk carries extremely high utility. The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, that seems like the more productive line of discussion.
Woah, woah! What! Since when?
Unless you mean "scope insensitivity"?
Well, sure, the absurdity heuristic is terrible.
Why would it scale linearly? I agree that it scales linearly over relatively small regimes (on the order of millions of lives) by fungibility, but I see no reason why that needs to be true for trillions of lives or more (and there are at least some reasons why it can't scale linearly forever).
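One way to picture the distinction: a minimal sketch assuming a hypothetical utility function that is exactly linear below a million lives (fungibility over small regimes) but logarithmically damped above it. The crossover point and the log form are my own illustrative choices, not anything argued for above.

```python
import math

# Hypothetical crossover: linear below this, sublinear above it.
FUNGIBLE_REGIME = 1_000_000

def linear_utility(lives: float) -> float:
    """Utility that scales linearly with lives saved, forever."""
    return lives

def concave_utility(lives: float) -> float:
    """Linear over the small regime, logarithmically damped beyond it."""
    if lives <= FUNGIBLE_REGIME:
        return lives
    return FUNGIBLE_REGIME * (1 + math.log(lives / FUNGIBLE_REGIME))

# Over small regimes the two agree, so benefit-per-dollar comparisons
# between ordinary charities are unaffected:
print(concave_utility(1000) == linear_utility(1000))  # True

# But for trillions of lives the two diverge by orders of magnitude:
print(linear_utility(1e12))   # 1e12
print(concave_utility(1e12))  # ~1.5e7
```

Under any such concave function, multiplying a tiny probability by an astronomical number of lives no longer yields an astronomical expected utility, which is why the linearity assumption does real work in the 8-lives-per-dollar style of estimate.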
Re-read the context of what I wrote. Whether or not the absurdity heuristic is a good heuristic, it is one that is fairly common among humans, so if your goal is to have a productive conversation with someone who doesn't already agree with you, you shouldn't throw out such an ambitious figure without a solid argument. You can almost certainly make whatever point you want to make with more conservative numbers.