
Rain comments on The $125,000 Summer Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 29 July 2011 09:02PM 20 points


Comment author: Rain 29 July 2011 06:19:00PM *  7 points [-]

They planned on doing an academic paper on the topic, though it hasn't been completed yet. Here's Anna Salamon's presentation, estimating 8 lives saved per dollar donated to SingInst.

Comment author: peter_hurford 29 July 2011 07:35:35PM 3 points [-]

8 lives per dollar is an awful, awful lot, but I'll definitely check out those resources. If the 8 lives per dollar claim is true, I'll be spending my money on SI.

Comment author: jsteinhardt 29 July 2011 07:59:08PM 5 points [-]

If a back-of-the-envelope calculation comes up with a number like that, then it is probably wrong.

Comment author: steven0461 29 July 2011 08:48:15PM *  10 points [-]

I haven't watched the presentation, but 8 lives corresponds to only a one in a billion chance of averting human extinction per donated dollar, which corresponds (neglecting donation matching and the diminishing marginal value of money) to roughly a 1 in 2000 chance of averting human extinction from a doubling of the organization's budget for a year. That doesn't sound obviously crazy to me, though it's more than I'd attribute to an organization just on the basis that it claimed to be reducing extinction risk.
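
A minimal sketch of this back-of-envelope calculation; the world population of roughly 7 billion and the annual budget of roughly $500,000 are illustrative assumptions chosen to reproduce the stated conclusions, not figures taken from the thread or the presentation:

```python
# Back-of-envelope reconstruction (all inputs are illustrative assumptions).

world_population = 7e9      # assumed number of people alive in 2011
lives_per_dollar = 8        # the figure attributed to the presentation

# 8 lives out of ~7 billion is roughly a one-in-a-billion chance of
# averting human extinction per donated dollar.
p_per_dollar = lives_per_dollar / world_population
print(f"chance per dollar: 1 in {1 / p_per_dollar:,.0f}")          # 1 in 875,000,000

# Doubling an assumed ~$500,000 annual budget for one year:
assumed_budget = 5e5
p_doubling = assumed_budget * p_per_dollar
print(f"chance from doubling budget: 1 in {1 / p_doubling:,.0f}")  # 1 in 1,750
```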

Comment author: MichaelVassar 01 August 2011 03:37:52PM *  2 points [-]

For what it's worth, this is in line with my estimates, which are not just on the basis of claimed interest in x-risk reduction. I don't think that an order of magnitude or more less than this level of effectiveness could be the conclusion of a credible estimation procedure.

Comment author: Rain 29 July 2011 08:27:51PM *  4 points [-]

The topics of existential risk, AI, and other future technologies inherently require the use of very large numbers, far beyond any of those encountered when discussing normal, everyday risks and rewards.

Comment author: steven0461 29 July 2011 08:52:01PM *  18 points [-]

Note that the large number used in this particular back-of-envelope calculation is the world population of several billion, not the still much larger numbers involved in astronomical waste.

Comment author: jsteinhardt 29 July 2011 10:57:49PM 7 points [-]

Even if this is so, there is tons of evidence that humans suck at reasoning about such large numbers. If you want to make an extraordinary claim like the one you made above, then you need to put forth a large amount of evidence to support it. And on such a far-mode topic, the likelihood of your argument being correct decreases exponentially with the number of steps in the inferential chain.

I only skimmed through the video, but assuming that the estimates at 11:36 are what you're referring to, those numbers are both seemingly quite high and entirely unjustified in the presentation. It also overlooks things like the fact that utility doesn't scale linearly in number of lives saved when calculating the benefit per dollar.

Whether or not those numbers are correct, presenting them in their current form seems unlikely to be very productive. Most likely, either the person you are talking to already agrees, or the 8-lives-per-dollar figure triggers an absurdity heuristic that will demand large amounts of evidence. Heck, I'm already pretty familiar with the arguments, and I still get a small amount of negative affect whenever someone makes the "donating to X-risk has <insert very large number> expected utility" argument.

I don't think anyone on LW disagrees that reducing xrisk substantially carries an extremely high utility. The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, this seems like the more productive path of discussion.

Comment author: Vladimir_Nesov 29 July 2011 11:37:54PM 7 points [-]

Keep in mind that estimation is the best we have. You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.

Comment author: jsteinhardt 30 July 2011 11:09:46AM 4 points [-]

You can't appeal to Nature for not having been given a warning that meets a sufficient standard of rigor.

From a Bayesian point of view, your prior should place low probability on a figure like "8 lives per dollar". Therefore, lots of evidence is required to overcome that prior.

From a decision-theoretic point of view, the general strategy of believing sketchy arguments (with no offense intended to Anna; I look forward to reading the paper when it is written) that reach extreme conclusions is a bad one. There would have to be a reason why this argument was somehow different from all other arguments of this form.

Avoiding all actions of uncertain character dealing with huge consequences is certainly a bad strategy. Any one of such actions might have a big chance of not working out, but not taking any of them is guaranteed to be unhelpful.

If there were tons of actions lying around with similarly huge potential positive consequences, then I would be first in line to take them (for exactly the reason you gave). As it stands, it seems like in reality I get a one-time chance to reduce p(bad singularity) by some small amount. More explicitly, it seems like SIAI's research program reduces xrisk by some small amount, and a handful of other programs would also reduce xrisk by some small amount. There is no combined set of programs that cumulatively reduces xrisk by some large amount (say > 3% to be explicit).

I have to admit that I'm a little bit confused about how to reason here. The issue is that any action I can personally take will only decrease xrisk by some small amount anyway. But to me the situation feels different if society can collectively decrease xrisk by some large amount, versus if even collectively we can only decrease it by some small amount. My current estimate is that we are in the latter case, not the former --- even if xrisk research had unlimited funding, we could only decrease total xrisk by something like 1%. My intuitions here are further complicated by the fact that I also think humans are very bad at estimating small probabilities --- so the 1% figure could very easily be a gross overestimate, whereas I think a 5% figure is starting to get into the range where humans are a bit better at estimating, and is less likely to be such a bad overestimate.

Comment author: paulfchristiano 31 July 2011 04:56:40AM 4 points [-]

From a Bayesian point of view, your prior should place low probability on a figure like "8 lives per dollar". Therefore, lots of evidence is required to overcome that prior.

My prior contains no such provisions; there are many possible worlds where tiny applications of resources have apparently disproportionate effect, and from the outside they don't look so unlikely to me.

There are good reasons to be suspicious of claims of unusual effectiveness, but I recommend making that reasoning explicit and seeing what it says about this situation and how strongly.

There are also good reasons to be suspicious of arguments involving tiny probabilities, but keep in mind: first, you probably aren't 97% confident that we have so little control over the future (I've thought about it a lot and am much more optimistic), and second, that even in a pessimistic scenario it is clearly worth thinking seriously about how to handle this sort of uncertainty, because there is quite a lot to gain.

Of course this isn't an argument that you should support the SIAI in particular (though it may be worth doing some information-gathering to understand what they are currently doing), but that you should continue to optimize in good faith.

Comment author: jsteinhardt 31 July 2011 04:13:42PM 1 point [-]

you should continue to optimize in good faith.

Can you clarify what you mean by this?

Comment author: paulfchristiano 02 August 2011 07:29:06AM *  0 points [-]

Only that you consider the arguments you have advanced in good faith, as a difficulty and a piece of evidence rather than potential excuses.

Comment author: Rain 29 July 2011 11:15:19PM 2 points [-]

I don't think anyone on LW disagrees that reducing xrisk substantially carries an extremely high utility.

I'm glad you agree.

The points of disagreement are over whether SIAI can non-trivially reduce xrisk, and whether they are the most effective way to do so. At least on this website, this seems like the more productive path of discussion.

I'd be very appreciative to hear if you know of someone doing more.

Comment author: multifoliaterose 29 July 2011 11:51:08PM *  6 points [-]

I'd be very appreciative to hear if you know of someone doing more.

Over the coming months I'm going to be doing an investigation of the non-profits affiliated with the Nuclear Threat Initiative with a view toward finding x-risk reduction charities other than SIAI & FHI. I'll report back what I learn but it may be a while.

Comment author: ciphergoth 31 July 2011 05:45:23PM 4 points [-]

I'm under the impression that nuclear war doesn't pose an existential risk. Do you disagree? If so, I probably ought to make a discussion post on the subject so we don't take this one too far off topic.

Comment author: multifoliaterose 31 July 2011 08:23:56PM *  8 points [-]

My impression is that the risk of immediate extinction due to nuclear war is very small, but that a nuclear war could cripple civilization to the point of not being able to recover enough to effect a positive singularity; it would also plausibly increase other x-risks - intuitively, nuclear war would destabilize society, and people developing advanced technologies in an unstable society are less likely to take safety precautions than they otherwise would be. I'd give a subjective estimate of 0.1% - 1% of nuclear war preventing a positive singularity.

Comment author: steven0461 31 July 2011 09:15:11PM *  4 points [-]

I'd give a subjective estimate of 0.1% - 1% of nuclear war preventing a positive singularity.

Do you mean:

  • The probability of PS given NW is .1-1% lower than the probability of PS given not-NW
  • The probability of PS is .1-1% lower than the probability of PS given not-NW
  • The probability of PS is 99-99.9% of the probability of PS given not-NW
  • etc?
Comment author: multifoliaterose 31 July 2011 09:29:45PM 1 point [-]

Good question. My intended meaning was the second of the meanings that you listed: "the probability of a positive singularity is 0.1%-1% lower than the probability of a positive singularity given no nuclear war." I'd be interested to hear any thoughts that you have about these things.

Comment author: ciphergoth 01 August 2011 05:28:15AM 1 point [-]

Thanks for the clarification on the estimate. Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we'd have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.

Comment author: gjm 02 August 2011 07:56:03PM 4 points [-]

Building society the first time around, we were able to take advantage of various useful natural resources such as relatively plentiful coal and (later) oil. After a nuclear war or some other civilization-wrecking catastrophe, it might be Very Difficult Indeed to rebuild without those resources at our disposal. It's difficult enough even now, with everything basically still working nicely, to see how to wean ourselves off fossil fuels, as for various reasons many people think we should do. Now imagine trying to build a nuclear power industry or highly efficient solar cells with our existing energy infrastructure in ruins.

So it looks to me as if (1) our best prospects for long-term x-risk avoidance all involve advanced technology (space travel, AI, nanothingies, ...) and (2) a major not-immediately-existential catastrophe could seriously jeopardize our prospects of ever developing such technology, so (3) such a catastrophe should be regarded as a big increase in x-risk.

Comment author: ArisKatsaris 02 August 2011 09:41:17PM 0 points [-]

because we'd have more time to think about existential risk mitigation while we rebuild society

A more likely result: the religious crazies will take over, and they either don't think existential risk can exist (because God would prevent it) or they think preventing existential risk would be blasphemy (because God ought to be allowed to destroy us). Or they even actively work to make it happen and bring about God's judgment.

And then humanity dies, because both denying and embracing existential risk cause it to come nearer.

Comment author: timtyler 02 August 2011 09:41:48AM 0 points [-]

Unhappy as it makes me to say it, I suspect that nuclear war or other non-existential catastrophe would overall reduce existential risk, because we'd have more time to think about existential risk mitigation while we rebuild society. However I suspect that trying to bring nuclear war about as a result of this reasoning is not a winning strategy.

Technical challenges? Difficulty in coordinating? Are there other candidate setbacks?

Comment author: multifoliaterose 01 August 2011 06:52:35PM *  0 points [-]

because we'd have more time to think about existential risk mitigation while we rebuild society

  1. It may be highly unproductive to think about advanced future technologies in very much detail before there's a credible research program on the table, because the search tree spans dozens of orders of magnitude. I presently believe this to be the case.

  2. I do think that we can get better at some relevant things at present (learning how to obtain predictions about probable government behaviors that are as accurate as realistically possible, etc.) and that, all else being equal, we could benefit from more time thinking about these things rather than less time.

  3. However, it's not clear to me that the time so gained would outweigh a presumed loss in clear thinking post-nuclear war and I currently believe that the loss would be substantially greater than the gain.

  4. As steven0461 mentioned, "some people within SingInst seem to have pretty high estimates of the return from efforts to prevent nuclear war." I haven't had a chance to talk about this with them in detail; but it updates me in the direction of attaching high expected value reduction to nuclear war risk reduction.

My positions on these points are very much subject to change with incoming information.

Comment author: jsteinhardt 30 July 2011 10:47:16AM 5 points [-]

Well for instance, certain approaches to AGI are more likely to lead to something friendly than other approaches are. If you believe that approach A is 1% less likely to lead to a bad outcome than approach B, then funding research in approach A is already compelling.

In my mind, a well-reasoned statistical approach with good software engineering methodologies is the mainstream approach that is least likely to lead to a bad outcome. It has the advantage that there is already a large amount of related research being done, hence there is actually a reasonable chance that such an AGI would be the first to be implemented. My personal estimate is that such an approach carries about 10% less risk than an alternative approach where the statistics and software are both hacked together.

In contrast, I estimate that SIAI's FAI approach would carry about 90% less risk if implemented than a hacked-together AGI. However, I assign very low probability to SIAI's current approach succeeding in time. I therefore consider the above-mentioned approach more effective.

Another alternative to SIAI that doesn't require estimates about any specific research program would be to fund the creation of high-status AI researchers who care about Friendliness. Then they are free to steer the field as a whole towards whatever direction is determined to carry the least risk, after we have the chance to do further research to determine that direction.

Comment author: Wei_Dai 30 July 2011 06:30:00PM 4 points [-]

My personal estimate is that such an approach carries about 10% less risk than an alternative approach where the statistics and software are both hacked together.

I don't understand what you mean by "10% less risk". Do you think any given project using "a well-reasoned statistical approach with good software engineering methodologies" has at least 10% chance of leading to a positive Singularity? Or each such project has a P*0.9 probability of causing an existential disaster, where P is the probability of disaster of a "hacked together" project. Or something else?

Comment author: jsteinhardt 31 July 2011 12:55:07AM 2 points [-]

Sorry for the ambiguity. I meant P*0.9.

Comment author: Wei_Dai 31 July 2011 02:15:03AM 1 point [-]

You said "I therefore consider the above-mentioned approach more effective.", but if all you're claiming is that the above mentioned approach ("a well-reasoned statistical approach with good software engineering methodologies") has a P*0.9 probability of causing an existential disaster, and not claiming that it has a significant chance of causing a positive Singularity, then why do you think funding such projects is effective for reducing existential risk? Is the idea that each such project would displace a "hacked together" project that would otherwise be started?

Comment author: jsteinhardt 31 July 2011 04:07:54PM *  0 points [-]

EDIT: I originally misinterpreted your post slightly, and corrected my reply accordingly.

Not quite. The hope is that such a project will succeed before any other hacked-together project succeeds. More broadly, the hope is that partial successes using principled methodologies will lead to those methodologies being more widely adopted in the AI community as a whole, and, more to the point, that a contingent of highly successful AI researchers advocating Friendliness can change the overall mindset of the field.

The default is a hacked-together AI project. SIAI's FAI research is trying to displace this, but I don't think they will succeed (my information on this is purely outside-view, however).

An explicit instantiation of some of my calculations:

SIAI approach: 0.1% chance of replacing P with 0.1P
Approach that integrates with the rest of the AI community: 30% chance of replacing P with 0.9P

In the first case, P is basically staying constant; in the second case, it is being replaced with 0.97P.
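
A minimal sketch of that arithmetic, using the adoption chances and risk reductions given above (the function and variable names are illustrative only):

```python
# Expected multiplier on the baseline risk P for each plan, using the
# numbers above: 0.1% / 30% chance of displacing the default project,
# and 0.1P / 0.9P risk if it does.

def expected_risk_factor(p_adopt, risk_if_adopted):
    """Expected multiplier on P: risk_if_adopted with probability p_adopt,
    otherwise the default hacked-together project (multiplier 1.0) happens."""
    return p_adopt * risk_if_adopted + (1 - p_adopt) * 1.0

siai_plan = expected_risk_factor(0.001, 0.1)        # 0.9991 -- basically still P
integrated_plan = expected_risk_factor(0.30, 0.9)   # 0.97   -- i.e. 0.97P

print(siai_plan, integrated_plan)
```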

Comment author: Rain 30 July 2011 03:47:39PM 3 points [-]

I noticed you didn't name anybody. Did you have specific programs or people in mind?

We already seem to (roughly) agree on probabilities.

Comment author: jsteinhardt 02 August 2011 09:14:21PM *  2 points [-]

The only specific plan I have right now is to put myself in a position to hire smart people to work on this problem. I think the most robust way to do this is to get a faculty position somewhere, but I need to consider the higher relative efficiency of corporations over universities some more to figure out if it's worthwhile to go with the higher-volatility route of industry.

Also, as Paul notes, I need to consider other approaches to x-risk reduction as well to see if I can do better than my current plan. The main argument in favor of my current plan is that there is a clear path to the goal, with only modest technical hurdles and no major social hurdles. I don't particularly like plans that start to get fuzzier than that, but I am willing to be convinced that this is irrational.

EDIT: To be more explicit, my current goal is to become one of said high-status AI researchers. I am worried that this is slightly self-serving, although I think I have good reason to believe that I have a comparative advantage at this task.

Comment author: MugaSofer 17 April 2013 02:02:06PM -2 points [-]

The only specific plan I have right now is to put myself in a position to hire smart people to work on this problem.

You know, I think somebody already thought of this. What was their name again...?

Comment author: JGWeissman 30 July 2011 05:39:12PM 2 points [-]

Another alternative to SIAI that doesn't require estimates about any specific research program would be to fund the creation of high-status AI researchers who care about Friendliness.

That seems more of an alternative within SIAI than an alternative to SIAI. With more funding, their Associate Research Program can promote the importance of Friendliness and increase the status of researchers who care about it.

Comment author: MugaSofer 17 April 2013 01:59:13PM 2 points [-]

It also overlooks things like the fact that utility doesn't scale linearly in number of lives saved when calculating the benefit per dollar.

Woah, woah! What! Since when?

Unless you mean "scope insensitivity"?

8 lives figure triggers an absurdity heuristic that will demand large amounts of evidence.

Well, sure, the absurdity heuristic is terrible.

Comment author: jsteinhardt 18 April 2013 07:58:35AM 4 points [-]

Woah, woah! What! Since when?

Why would it scale linearly? I agree that it scales linearly over relatively small regimes (on the order of millions of lives) by fungibility, but I see no reason why that needs to be true for trillions of lives or more (and at least some reasons why it can't scale linearly forever).

Well, sure, the absurdity heuristic is terrible.

Re-read the context of what I wrote. Whether or not the absurdity heuristic is a good heuristic, it is one that is fairly common among humans, so if your goal is to have a productive conversation with someone who doesn't already agree with you, you shouldn't throw out such an ambitious figure without a solid argument. You can almost certainly make whatever point you want to make with more conservative numbers.

Comment author: MugaSofer 19 April 2013 01:30:56PM *  -2 points [-]

Why would it scale linearly? I agree that it scales linearly over relatively small regimes (on the order of millions of lives) by fungibility, but I see no reason why that needs to be true for trillions of lives or more (and at least some reasons why it can't scale linearly forever).

Let's say you currently have a trillion utility-producing thingies - call them humans, if it helps. You're pretty happy. In fact, you have so many that the utility of more is negligible.

Then Doctor Evil appears! He has five people hostage, he's holding them to ransom!

His ransom: kill off six of the people you already have.

Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!

Rinse. Repeat.

Re-read the context of what I wrote. Whether or not the absurdity heuristic is a good heuristic, it is one that is fairly common among humans, so if your goal is to have a productive conversation with someone who doesn't already agree with you, you shouldn't throw out such an ambitious figure without a solid argument. You can almost certainly make whatever point you want to make with more conservative numbers.

Well sure, if we're talking Dark Arts...

Comment author: jsteinhardt 20 April 2013 08:45:46AM 5 points [-]

Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!

This isn't true --- the choice is between N-6 and N-5 people; N-5 people is clearly better. Not to be too blunt, but I think you've badly misunderstood the concept of a utility function.

Well sure, if we're talking Dark Arts...

Actively making your argument objectionable is very different from avoiding the use of the Dark Arts. In fact, arguably it has the same problem that the Dark Arts has, which is that it causes someone to believe something (in this case, the negation of what you want to show) for reasons unrelated to the validity of the supporting argument.

Comment author: private_messaging 20 April 2013 09:23:14AM *  2 points [-]

This isn't true --- the choice is between N-6 and N-5 people; N-5 people is clearly better. Not to be too blunt, but I think you've badly misunderstood the concept of a utility function.

Yes. The hypothetical utility function could, e.g., take a list of items and then return the utility. It need not satisfy f(A,B)=f(A)+f(B), where "," is list concatenation. For example, this would apply to the worth of books, where a library is more worthy than however many copies of some one book. To simply sum the values of books considered independently is ridiculous; it's like valuing books by weight. The information content of the brain, or whatever else it is that you might value (process?), is a fair bit more like a book than it is like the weight of the books.

Comment author: MugaSofer 23 April 2013 10:43:19AM -1 points [-]

Actively making your argument objectionable is very different from avoiding the use of the Dark Arts. In fact, arguably it has the same problem that the Dark Arts has, which is that is causes someone to believe something (in this case, the negation of what you want to show) for reasons unrelated to the validity of the supporting argument.

Sorry, I only meant to imply that I had assumed we were discussing rationality, given the low status of the "Dark Arts". Not that there was anything wrong with such discussion; indeed, I'm all for it.

Comment author: CCC 20 April 2013 12:45:02PM 0 points [-]

Since those trillion people's value didn't scale linearly, reducing them by six isn't nearly as important as five people!

This doesn't hold. Those extra five should be added onto the trillion you already have, not considered separately.

Value only needs to increase monotonically. Linearity is not required; it might even be asymptotic.

Comment author: MugaSofer 23 April 2013 10:56:18AM -1 points [-]

Those extra five should be added onto the trillion you already have, not considered separately.

That depends on how you do the accounting here. If we check the utility provided by saving five people, it's high. If we check the utility provided by increasing a population of a trillion, it's unfathomably low.

This is, in fact, the point.

Intuitively, we should be able to meaningfully analyse the utility of a part without talking about - or even knowing - the utility of the whole. Discovering vast interstellar civilizations should not invalidate our calculations made on how to save the most lives.

Comment author: CCC 23 April 2013 12:27:46PM 2 points [-]

Let us assume that we have A known people in existence. Dr. Evil presents us with B previously unknown people, and threatens to kill them unless we kill C out of our A known people (where C<A). The question is, whether it is ethically better to let B people die, or to let C people die. (It is clearly better to save all the people, if possible).

We have a utility function, f(x), which describes the utility produced by x people. Before Dr. Evil turns up, we have A known people; and a total utility of f(A+B). After Dr. Evil arrives, we find that there are more people; we have a total utility of f(A+B) (or f(A+B+1), if Dr. Evil was previously unknown; from here onwards I will assume that Dr. Evil was previously known, and is thus included in A). Dr. Evil offers us a choice, between a total utility of f(A+B-C) or a total utility of f(A).

The immediate answer is that if B>C, it is better for B people to live; while if C>B, then it is better for C people to live. For this to be true for all A, B and C, it is necessary for f(x) to be a monotonically increasing function; that is, a function where f(y)>f(x) if and only if y>x.

Now, you are raising the possibility that there exist a number, D, of people in vast interstellar civilisations who are completely unknown to us. Then Dr. Evil's choice becomes a choice between a total utility of f(A+B-C+D) and a total utility of f(A+D). Again, as long as f(x) is monotonically increasing, the question of finding the greatest utility is simply a matter of seeing whether B>C or not.

I don't see any cause for invalidating any of my calculations in the presence of vast interstellar civilisations.
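
A concrete check of this argument with an explicitly non-linear (saturating) utility function, f(x) = x/(x + K); the particular function and all of the population numbers below are illustrative assumptions, not anything stated in the thread. Even though f flattens out for huge populations, the comparison still reduces to whether B > C:

```python
from fractions import Fraction

def f(x, K=10**9):
    """An illustrative utility function: strictly increasing but saturating
    (it approaches 1 for huge x), so it is far from linear. Exact rational
    arithmetic avoids any floating-point rounding in the comparison."""
    return Fraction(x, x + K)

A = 10**12     # known people
B, C = 5, 6    # Dr. Evil's hostages, and the deaths he demands in exchange
D = 10**15     # hypothetical unknown interstellar population

comply = f(A + B - C + D)   # give in: C of the known people die, B hostages live
refuse = f(A + D)           # refuse: the B hostages die

print(comply > refuse)              # False: complying is worse, since B < C
print((A + B - C + D) > (A + D))    # the same comparison, because f is monotone
```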

Comment author: David_Gerard 12 June 2014 02:43:48PM *  0 points [-]

Transcript; the precise wording is "You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives.", i.e. 8 lives per dollar. It's at 12:31. The slide up at that moment during the presentation emphasises the point; this wasn't a casual aside.

Comment author: arundelo 12 June 2014 05:31:57PM 0 points [-]