SforSingularity comments on Optimal Strategies for Reducing Existential Risk - Less Wrong

Post author: FrankAdamek 31 August 2009 03:52PM


Comment author: SforSingularity 01 September 2009 01:52:47PM *  1 point [-]

I think that there may be clever ways that a co-operating group of risk-reducers can "game" the current socio-economic system.

Specifically, we should be much more risk-tolerant in our acquisition of money than the average person of our abilities. A career at a large firm, such as a law firm, is certainly good, but why not take an option such as entrepreneurship, with its long tail of increasingly high returns? If a sizeable group (say, 30 people) of co-operating risk-reducers all take high-risk, high-reward paths, their pooled expected return is greater than if they all pursued the usual cautious, steady-job routes.
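As a minimal sketch of this argument (all numbers hypothetical, chosen only for illustration): a pool of donated earnings cares about the expected total raised rather than any one member's variance, and a group of 30 has a non-trivial chance of at least one tail outcome even when each member's individual odds are small.

```python
# Hypothetical payoff distributions (illustrative only, not real figures).
safe_lifetime_earnings = 1_000_000          # steady career path
risky_outcomes = [                          # entrepreneurial long tail
    (0.90, 100_000),                        # most ventures fail
    (0.09, 5_000_000),                      # modest success
    (0.01, 100_000_000),                    # rare large exit
]

# A risk-neutral pool of donated earnings cares only about the expectation,
# which here favors the risky path (1,540,000 vs. 1,000,000):
expected_risky = sum(p * v for p, v in risky_outcomes)

# With 30 cooperating risk-takers, the chance that at least one member
# lands the rare large exit is no longer negligible (about 0.26):
p_at_least_one_exit = 1 - (1 - 0.01) ** 30

print(expected_risky, p_at_least_one_exit)
```

The individual members may still face mostly-bad personal outcomes; the point is that the group's aggregate expectation, not any one member's median outcome, is what matters for funding risk reduction.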

In cases where existential risk is mitigated in a way that also allows the risk-mitigators to survive (for example, because an FAI is built within their lifetimes, or they are successfully cryopreserved and then reanimated), they can arrange for the post-risk society to reward those who took risk-mitigating action. The reward would be calibrated so that, taking into account each mitigator's discount rate, the mitigation work is on balance a positive contribution to that mitigator's discounted future reward, as judged from the mitigator's point of view today. This could be construed as akin to a financial instrument.

Comment author: FrankAdamek 02 September 2009 03:55:53AM 2 points [-]

they can arrange for the post-risk society to reward those who took risk-mitigation action

Beyond immortality, any conceivable VR experience, and the ability to turn our current happy-sad gradients into gradients of bliss?

Comment author: SforSingularity 02 September 2009 01:24:28PM 1 point [-]

You'd get all that even if you did nothing to help. We have a free rider problem as far as a positive singularity goes.

Comment author: FrankAdamek 03 September 2009 12:42:58AM 1 point [-]

Yes, that's true. My hope is that other people think as I do and view the small reduction in risk they can accomplish as worthwhile: even though everyone gets the benefits, a slightly higher chance of those benefits is a pretty neat thing. But that's a mere hope.

It seems very difficult to say who was helping and who wasn't, and the motivational power of such an idea is proportional to the probability of such a posthuman future being realized. With so much uncertainty, I don't think many would take it seriously. But if it could be done, it might not be bad. And if Pascalian muggings worked, one would be handy here: just postulate a sufficiently strong degree of risk-mitigator favoritism.

Comment author: SforSingularity 03 September 2009 10:39:59AM 0 points [-]

and the motivational power of such an idea is proportional to the probability of such a posthuman future being realized

What do you think that probability is?

Comment author: FrankAdamek 04 September 2009 03:49:53PM 1 point [-]

Offhand I'd call it "very small", as it requires both a future in which people (or their continuations) are around, and that significant power is held by a group (however large) that thinks we should reward and punish people accordingly, and/or has successfully precommitted to do so.

Comment author: SforSingularity 04 September 2009 04:57:13PM *  1 point [-]

Also, suppose that there are and will be 10,000 singularitarian activists who can, together, increase the probability of a positive singularity outcome from 0.1 to 0.2, and that you are average amongst them. The benefit that accrues to you if you spend time working with the singularitarian movement is then delta U * 0.1/10,000 = 10^(-5) * delta U, where delta U is the difference between the expected utility of the life you will live conditional upon existential disaster (which won't occur for quite a while - at least 15 years from today) and the expected utility of the life you will live conditional upon a positive singularity outcome.

I doubt that anyone really has a utility function that supports a delta U of 100,000 times the typical utility differences of everyday life, e.g. 100,000 times the utility difference of spending money on a nice house, an expensive family, etc. Therefore the goodness of a post-positive-singularity outcome cannot by itself incentivize an individual to bring it about, so the singularitarian movement has to rely upon people whose personal notion of goodness comes from being the kind of person who puts others before themselves, even in the face of criticism and ostracism from those others.

That is, unless there is some kind of reward/punishment precommitment going on.
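The back-of-envelope calculation above can be sketched directly. Note that the division by 10,000 implies 10,000 activists; delta U is left symbolic, as a multiplier.

```python
# Expected personal benefit of activism, in units of delta_U (the utility
# gap between the disaster future and the positive-singularity future).
# Figures follow the comment's formula: 10,000 activists jointly shift
# the probability of a positive outcome from 0.1 to 0.2.
activists = 10_000
delta_p = 0.1                  # joint probability shift, 0.2 - 0.1
share = delta_p / activists    # an average individual's contribution

# share is roughly 1e-05, so delta_U would need to be on the order of
# 100,000x everyday utility stakes to motivate a purely selfish agent.
print(share)
```

This is what makes the free-rider problem bite: the per-person expected benefit is five orders of magnitude smaller than the joint benefit.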

Comment author: FrankAdamek 06 September 2009 02:08:11AM *  2 points [-]

While adopting a virtue ethic of being the sort of person who works against existential risk may result in ostracism IF you reveal it, if we assume that ostracism hurts efforts to reduce that risk, then the rational thing for such a person to do would be to keep it to themselves.

But yes, it may happen that it would be rational to bring up such issues, get one (important?) person involved and motivated and simultaneously ostracize yourself from everyone else. Then you would need to be a person who cared more about others' wellbeing than what those people think of you. Which, IMHO, is pretty damn cool.

Comment author: SforSingularity 07 September 2009 09:36:25PM 0 points [-]

The larger problem is that people close to one - one's partner, parents, close friends - will all find out sooner or later; indeed attempting to hide it is probably even worse as it erodes trust.

Comment author: SforSingularity 07 September 2009 09:39:56PM *  0 points [-]

Which, IMHO, is pretty damn cool.

It isn't cool if everyone ostracizes you and your life sucks whilst you work to save everyone, and then afterwards you get no acknowledgment; at least not in my book, and especially not if the problem is so large that the incremental reduction in risk you can achieve is very, very small.

But in reality, I think that there are third options: side benefits to being involved in the risk-reduction movement. The other people in the movement are nice and smart, which makes them great to be friends with, and they influence you positively; the movement provides personal motivation beyond what you would normally have; and if you are good, the incremental reduction in risk you can achieve is large enough that you make a substantial improvement to your own prospects. So I actually think that being a risk-reducer is a personal gain, at least as the current situation stands.

If the situation changed so that it became a heavy personal loss (e.g. you could maximally reduce risk only by sacrificing your life, or risking a serious probability of that, for the cause), then I would want to advocate heavily for incentivization in some form; otherwise, a lot of people would drop away from the movement (not necessarily me, though I would have to do some soul-searching).

Comment author: FrankAdamek 14 September 2009 10:22:35AM 2 points [-]

Though they end up being small factors in my own considerations, I like the mention of the side benefits of being part of such a group.

Comment author: Eliezer_Yudkowsky 04 September 2009 08:19:46PM 0 points [-]

You appear to assume that rationalists are selfish? Or that our "real selves" are exclusively sub-deliberative systems that can't multiply benefits to others?

Comment author: SforSingularity 04 September 2009 04:37:16PM *  0 points [-]

How about the consideration that, out of all good futures that suffer from a tragedy-of-the-commons type problem, those that implement reward/punishment precommitments are more likely to overcome the free rider problem and actually work? Does this not push the probability up somewhat?

Comment author: SforSingularity 01 September 2009 02:25:05PM 0 points [-]

I see that utilitarian has already made this point:

Summary. In many cases, the good accomplished by money is approximately proportional to the amount donated, so that traditional arguments for being risk averse with respect to wealth don't apply. In such circumstances, utilitarians should take advantage of economic risk premia, such as those that accrue to riskier stocks. (For instance, in the context of the Capital Asset Pricing Model, "riskier" means "higher beta," i.e., higher scaled covariance with market returns.)
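A minimal sketch of the quoted point, with hypothetical numbers: a log-utility consumer (diminishing returns to personal wealth) declines a higher-mean gamble that a linear-utility donor should accept, which is precisely why risk premia exist for the risk-neutral donor to harvest.

```python
import math

# Hypothetical lotteries as (probability, payoff) pairs: the risky one
# has a higher mean (120,000 vs. 100,000) but also higher variance.
safe = [(1.0, 100_000)]
risky = [(0.5, 20_000), (0.5, 220_000)]

def expected_utility(lottery, u=lambda x: x):
    """Expected utility of a lottery under utility function u (linear by default)."""
    return sum(p * u(x) for p, x in lottery)

# A donor whose impact is roughly linear in dollars is risk-neutral and
# takes the higher-mean gamble, collecting the risk premium:
assert expected_utility(risky) > expected_utility(safe)

# A log-utility consumer refuses the same gamble; that widespread
# aversion is what sustains the premium in the first place:
assert expected_utility(risky, math.log) < expected_utility(safe, math.log)
```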