Vaniver comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

27 points | Post author: Kaj_Sotala | 26 December 2010 11:21AM

Comment author: [deleted] 28 December 2010 07:46:27PM -2 points [-]

This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes. Don't tell me there isn't irrational prejudice here!

The argument that any donation is subject to similar objections is silly because it's obvious that a human-welfare maximizer would plug for the donation the donor believes best, despite the unlikelihood of finding the absolute best. It should also be obvious that my argument is that it's unlikely the Singularity Institute comes anywhere near the best donation, and one reason is precisely that unlikelihood of picking the best, even when you have to forgo the literal very best!

Numerous posters wouldn't pick this particular charity, even if it happened to be among the best, unless they were motivated by signaling aspirations rather than the rational choice of the best recipient. As Yvain said in the previous entry: "Deciding which charity is the best is hard." Rationalists should detect the irrationality of making an exception when one option is the Singularity Institute.

(As to whether signaling is rational: that's completely irrelevant to the discussion, as we're talking about the best donation from a human-welfare standpoint. To argue that the contribution makes sense because signaling might be as rational as donating, even if plausible, is merely to change the subject rather than respond to the argument.)

Another argument for the Singularity Institute donation can't be dismissed so easily. I read the counter-argument as saying that the Singularity Institute is clearly the best donation conceivable. To that I don't have an answer, any more than I have a counter-argument for many outright delusions. I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain's preceding entry, where each $500 saves a human life?

Before downvoting this, ask yourself whether you're saying my point is unintelligent or shouldn't be raised for other reasons. (Ask yourself if my point should be made, was made by anyone else, and isn't better than at least 50% of the postings here. Ask yourself whether it's rational to upvote the critic and his silly argument, and whether the many donors arrived at their views about the Singularity Institute's importance based on the representativeness heuristic, the halo effect surrounding Eliezer, neglect of the probability of delivering any benefit, and a multitude of other errors in reasoning.)

Comment author: Vaniver 28 December 2010 08:07:08PM *  6 points [-]

This posting above, which begins with an argument that is absolutely silly, managed to receive 11 votes.

Envy is unbecoming; I recommend against displaying it. You'd be better off starting with your 3rd sentence and cutting the word "silly."

I would ask this question: what comparison did donors make to decide the Singularity Institute is a better recipient than the one mentioned in Yvain's preceding entry, where each $500 saves a human life?

They have worked out this math, and it's available in most of their promotional stuff that I've seen. Their argument is essentially "instead of operating on the level of individuals, we will either save all of humanity, present and future, or not." And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it's a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).
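
To make that arithmetic concrete, here is a minimal sketch in Python; the numbers (a 1-in-7-billion probability bump per $500 and a nominal 100 billion future lives) are illustrative assumptions of mine, not figures from SIAI's materials:

    # Back-of-the-envelope version of the comparison above; every number is an
    # illustrative assumption, not a published SIAI figure.
    present_lives = 7e9
    p_bump = 1 / present_lives       # assumed extra chance of success per $500
    future_lives = 1e11              # hypothetical count of future lives at stake

    ev_direct = 1.0                                    # one guaranteed life per $500
    ev_siai_present = p_bump * present_lives           # = 1.0, break-even on present lives
    ev_siai_total = p_bump * (present_lives + future_lives)   # ~15 expected lives

    print(ev_direct, ev_siai_present, ev_siai_total)

On present lives alone the two options break even; counting any future lives at all is what tips the comparison, which is the shape of the argument rather than a claim about the true numbers.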

The question as to whether SIAI is the best way to nudge the entire future of humanity is a separate question from whether or not SIAI is a better bet than preventing malaria deaths. I don't know if SIAI folks have made quantitative comparisons to other x-risk reduction plans, but I strongly suspect that if they have, a key feature of the comparison is that if we stop the Earth from getting hit by an asteroid, we just prevent bad stuff. If we get Friendly AI, we get unimaginably good stuff (and if we prevent Unfriendly AI without getting Friendly AI, we also prevent bad stuff).

Comment author: [deleted] 28 December 2010 09:48:15PM -2 points [-]

They have worked out this math, and it's available in most of their promotional stuff that I've seen. Their argument is essentially "instead of operating on the level of individuals, we will either save all of humanity, present and future, or not." And so if another $500 gives SIAI an additional 1 out of 7 billion chance of succeeding, then it's a better bet than giving $500 to get one guaranteed life (and that only looks at present lives).

Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal's Wager. Pascal argued that if belief in God provided the most minuscule increase in the likelihood of being heaven-bound, worship was prudent in light of heaven's infinite rewards. One of the argument's fatal flaws is that there is no reason to think worshipping this god will avoid reprisals by the real god—or any number of equally improbable alternative outcomes.

The Singularity Institute imputes only finite utiles, but the flaw is the same. It could as easily come to pass that the Institute's activities make matters worse. They aren't entitled to assume their efforts to control matters won't have effects the reverse of the ones intended, any more than Pascal had the right to assume worshipping this god isn't precisely what will send one to hell. We just don't know (can't know) about god's nature by merely postulating his possible existence: we can't know that the minuscule effects don't run the other way. Similarly, there's no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.

When the only reason an outcome seems to have any probability at all is the extreme tininess of the probability being claimed, the reverse outcome must be allowed the same benefit, and the two cancel out.
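
A toy version of that cancellation, with numbers made up purely to show the shape of the calculation:

    # If the only warrant for assigning the good outcome a tiny probability is
    # that the probability asked for is tiny, the bad outcome deserves the same
    # tiny probability, and the expectations offset. All numbers are made up.
    eps = 1e-10        # same "minuscule" probability granted in both directions
    stakes = 7e9       # magnitude of the outcome, good or bad (lives)

    expected_value = eps * stakes + eps * (-stakes)
    print(expected_value)   # 0.0 -- the two branches cancel exactly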

Comment author: JGWeissman 28 December 2010 11:19:06PM 10 points [-]

there's no reason to think whatever minuscule probability the Singularity Institute assigns to the hopeful outcome is a better estimate than would be had by postulating reverse minuscule effects.

When I get in my car to drive to the grocery store, do you think there is any reason to favor the hypothesis that I will arrive at the grocery store over all the a priori equally unlikely hypotheses that I arrive at some other destination?

Comment author: [deleted] 02 January 2011 01:16:39AM 1 point [-]

Depends. Do you know where the grocery store actually is? Do you have an accurate map of how to get there? Have you ever gone to the grocery store before? Or is the grocery store an unknown, unsignposted location which no human being has ever visited or even knows how to visit? Because if it were the latter, I'd bet pretty strongly against you getting there...

Comment author: Nick_Tarleton 02 January 2011 01:25:21AM 3 points [-]

The point of the analogy is that probability mass is concentrated towards the desired outcome, not that the desired outcome becomes more likely than not.

Comment author: [deleted] 02 January 2011 01:40:18AM 0 points [-]

In a case where no examples of grocery stores have ever been seen, when intelligent, educated people even doubt the possibility of the existence of a grocery store, and when some people who are looking for grocery stores are telling you you're looking in the wrong direction, I'd seriously doubt that the intention to drive there was shifting the probability mass by any measurable amount.

Comment author: Desrtopa 05 January 2011 03:24:07PM 0 points [-]

If you were merely wandering aimlessly with the hope of encountering a grocery store, it would only affect your chance of ending up there insofar as you'd intentionally stop looking if you arrived at one, and not if you didn't. But our grocery seeker is not operating in a complete absence of evidence with regard to how to locate groceries, should they turn out to exist, so the search is, if not well focused, at least not actually aimless.

Comment author: TheOtherDave 28 December 2010 10:47:21PM 7 points [-]

I usually think about this not as expected-utility calculations based on negligible probabilities of vast outcomes being just as likely as their negations, but as such calculations being altogether unreliable, because our numerical intuitions are unreliable outside the ranges we're calibrated for.

For example, when trying to evaluate the plausibility of an extra $500 giving SIAI an extra 1 out of 7 billion chance of succeeding, there is something in my mind that wants to say "well, geez, 1e-10 is such a tiny number, why not?"

Which demonstrates that my brain isn't calibrated to work with numbers in that range, which is no surprise.

So I do best to set aside my unreliable numerical intuitions and look for other tools with which to evaluate that claim.

Comment author: Vaniver 29 December 2010 05:57:36AM 5 points [-]

Their logic is unsound, due to the arbitrary premise; their argument has a striking resemblance to Pascal's Wager.

They're aware of this and have written about it. The argument is "just because something looks like a known fallacy doesn't mean it's fallacious." If you wanted to reason about existential risks (that is, small probabilities that all humans will die), could you come up with a way to discuss them that didn't sound like Pascal's Wager? If so, I would honestly greatly enjoy hearing it, so I have something to contrast to their method.

It could as easily come to pass that the Institute's activities make matters worse.

It's not clear to me that it could "as easily" make matters worse, and I think that's where your counterargument breaks down. If they have a 2e-6 chance of making things better and a 1e-6 chance of making things worse, then they're still ahead by 1e-6. With Pascal's Wager, you don't have any external information about which god is actually going to be doing the judging; with SIAI, you do have some information about whether or not Friendliness is better than Unfriendliness. It's like praying to the set of all benevolent gods instead of picking Jesus over Buddha; there's still a chance a malevolent god is the one you end up with, but it's a better bet than picking solo (and you're screwed anyway if you get a malevolent god).
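
As a sketch of that asymmetry (the specific probabilities are invented; the point is only that they need not be equal):

    # Counter to the cancellation argument: any evidence that the good outcome
    # is more likely than the bad one leaves a positive expected value.
    # Probabilities here are illustrative, not claims about SIAI.
    p_better = 2e-6     # assumed chance the donation makes things better
    p_worse = 1e-6      # assumed chance it makes things worse
    stakes = 7e9        # lives at stake, same magnitude either way

    net_expected_lives = (p_better - p_worse) * stakes
    print(net_expected_lives)   # 7000.0 -- positive whenever p_better > p_worse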

I agree with you that it's not clear that SIAI actually increases the chance of FAI occurring, but I think it more likely that a non-zero effect is positive rather than negative.