Here's my attempt at a summary of what needs to be done:
Basic enabling technologies: computers, software, networks, AI, biotechnology, nanotechnology.
Medicine: better treatments for viral diseases, cancer, aging.
Energy: anything that has prospects of actually replacing fossil fuels. Solar electric and algae-based biofuels currently look like the most promising lines of development, as I see it.
Space: anything that could shorten the time during which all our eggs are in one basket.
And I see three major ways in which individuals can contribute:
Personally work in one of the vital areas.
Make money and spend or donate some of it to support people who are doing such work.
Last but not least, this is a marathon, not a sprint: take the long view and raise children who can do one or more of these.
I think that there may be clever ways that a co-operating group of risk-reducers can "game" the current socio-economic system.
Specifically, we should be much more risk-tolerant in our acquisition of money than the average person with our abilities. A conventional career at, say, a large law firm is certainly good, but why not take an option such as entrepreneurship, with its long tail of increasingly high returns? If a sizeable group (say, 30 people) of co-operating risk-reducers all take high-risk, high-reward paths, they can produce a greater expected return than if they all pursued the usual cautious, steady job routes, while pooling across the group smooths out the individual-level risk.
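A toy illustration of that pooling argument, sketched in Python below; every payoff figure and probability in it is invented purely for the sake of the example, not drawn from anything above:

```python
# Toy comparison of 30 co-operating risk-reducers pooling their outcomes.
# All payoffs and probabilities are invented purely for illustration.
import random

N_PEOPLE = 30
SAFE_LIFETIME_DONATION = 500_000       # assumed donation from a steady career
RISKY_SUCCESS_PROB = 0.05              # assumed chance a startup pays off big
RISKY_SUCCESS_DONATION = 20_000_000    # assumed donation if it does
RISKY_FAILURE_DONATION = 100_000       # assumed donation if it does not

def group_total_risky(rng):
    """Total pooled donations when all members take the high-risk path."""
    return sum(
        RISKY_SUCCESS_DONATION if rng.random() < RISKY_SUCCESS_PROB
        else RISKY_FAILURE_DONATION
        for _ in range(N_PEOPLE)
    )

rng = random.Random(0)
trials = [group_total_risky(rng) for _ in range(10_000)]

safe_total = N_PEOPLE * SAFE_LIFETIME_DONATION
mean_risky = sum(trials) / len(trials)
p_big_win = sum(t >= RISKY_SUCCESS_DONATION for t in trials) / len(trials)

print(f"steady-career group total:            ${safe_total:,}")
print(f"high-risk group, mean pooled total:   ${mean_risky:,.0f}")
print(f"high-risk group, P(>= one big exit):  {p_big_win:.0%}")
```

With these made-up numbers, the pooled expected total is roughly double the steady-career group's total, and the group has about a 79% chance of at least one big exit - which is the sense in which co-operation smooths out the individual-level risk.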
In cases where existential risk is mitigated in a way that also allows the risk-mitigators to survive - for example, because an FAI is built within their lifetimes, or because they are successfully cryopreserved and then reanimated - they can arrange for the post-risk society to reward those who took risk-mitigation action, such that, taking into account each mitigator's discount rate, the action was on balance a positive contribution to that mitigator's discounted future reward, as judged from the mitigator's point of view today. This could be construed as akin to a financial instrument.
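A minimal sketch of how such an instrument would have to be priced; the cost, discount rate, payout delay, and probability of the promise being honoured are all hypothetical numbers I've picked just to show the shape of the calculation:

```python
# Sketch of the "financial instrument" framing: a post-risk society promises a
# reward R, paid T years from now with probability q, to anyone who bears a
# risk-mitigation cost C today. Every number here is an illustrative assumption.
cost_today = 50_000        # assumed personal cost of mitigation work (C)
discount_rate = 0.05       # assumed personal annual discount rate (r)
years_until_reward = 40    # assumed delay until the post-risk payout (T)
p_payout = 0.02            # assumed chance the promise is ever honoured (q)

# The deal is positive from today's viewpoint iff q * R / (1 + r)**T > C,
# so the smallest reward worth promising is:
required_reward = cost_today * (1 + discount_rate) ** years_until_reward / p_payout
print(f"Promised reward must exceed roughly ${required_reward:,.0f}")
```

The point is only that the promised reward has to be scaled up both by the mitigator's discount rate and by the (presumably small) probability that the arrangement is ever carried out.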
they can arrange for the post-risk society to reward those who took risk-mitigation action
Beyond immortality, any conceivable VR experience, and the ability to turn our current happy-sad gradients into gradients of bliss?
You'd get all that even if you did nothing to help. We have a free rider problem as far as a positive singularity goes.
Yes, that's true. My hope is that other people think the way I do and view the small reduction in risk they can accomplish as worthwhile: even though everyone gets the benefits, a slightly higher chance of those benefits is a pretty neat thing. But that's a mere hope.
It seems very difficult to say who was helping and who wasn't, and the motivational power of such an idea is proportional to the probability of such a posthuman future being realized. With so much uncertainty, I don't think many would take it seriously. But if it could be done, it might not be bad. If Pascalian muggings worked, this would be a nice one: just postulate a sufficiently high degree of risk-mitigator favoritism.
and the motivational power of such an idea is proportional to the probability of such a posthuman future being realized
What do you think that probability is?
Offhand I'd call it "very small", as it requires both a future in which people (or their continuations) are around, and that significant power is held by a group (however large) that thinks we should reward and punish people on that basis, and/or has successfully precommitted to do so.
Also, suppose that there are and will be 10,000 singularitarian activists who can, together, increase the probability of a positive singularity outcome from 0.1 to 0.2, and that you are average amongst them. The benefit that accrues to you if you spend time working with the singularitarian movement is then delta U * 0.1/10,000 = 10^(-5) delta U, where delta U is the utility difference between the expected utility of the life you will live conditional upon existential disaster (which won't occur for quite a while - at least 15 years from today) and the utility of the life you will live conditional upon a positive singularity outcome.
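Spelling that arithmetic out with the same illustrative numbers:

```python
# The incentive arithmetic above, with the same illustrative numbers.
n_activists = 10_000
p_without = 0.1    # probability of a positive singularity with no activism
p_with = 0.2       # probability with all the activists working on it

# An "average" activist's share of the probability increase:
per_person_increase = (p_with - p_without) / n_activists
print(f"per-person probability increase: {per_person_increase:.0e}")   # ~1e-05

# For this alone to motivate you, delta U must be worth roughly this many
# everyday-sized utility differences:
print(f"required delta U multiple: ~{1 / per_person_increase:,.0f}")   # ~100,000
```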
I doubt that anyone really has a utility function that supports a delta U of 100,000 times the typical utility differences in our everyday lives, e.g. 100,000 times the utility difference of spending money on a nice house, an expensive family, etc. Therefore the goodness of a positive post-singularity outcome cannot by itself incentivize the individual to bring it about, so the singularitarian movement has to rely upon people whose personal notion of goodness comes from being the kind of person who puts others before themselves, even in the face of criticism and ostracism from those others.
That is, unless there is some kind of reward/punishment precommitment going on.
While adopting a virtue ethic of being the sort of person who works against existential risk may result in ostracism IF you reveal it, if we assume that ostracism hurts efforts to reduce that risk then the rational thing for such a person to do would be to keep it to themselves.
But yes, it may happen that it would be rational to bring up such issues, get one (important?) person involved and motivated, and simultaneously ostracize yourself from everyone else. Then you would need to be a person who cared more about others' wellbeing than about what those people think of you. Which, IMHO, is pretty damn cool.
Which, IMHO, is pretty damn cool.
It isn't cool if everyone ostracizes you and your life sucks whilst you work to save everyone, and then afterwards you get no acknowledgment; at least not in my book, especially if the problem is so large that the incremental reduction in risk you can achieve is very, very small.
But in reality, I think there are third options - side benefits to being involved in the risk-reduction movement: the other people in the movement are nice and smart, which makes them great to be friends with and a good influence on you; it provides personal motivation beyond what you would normally have; and if you are good, the incremental reduction in risk you can achieve is large enough that you make a substantial improvement to your own prospects. So I actually think that being a risk-reducer is a personal gain, at least as the situation currently stands.
If the situation changed so that it was a heavy personal loss (e.g. you could maximally reduce risk by sacrificing your life, or by risking a serious probability of that, for the cause), then I would want to heavily advocate incentivization in some form; otherwise, a lot of people would drop away from the movement (not necessarily me, though I would have to do some soul-searching).
Though they end up being small factors in my own considerations, I like the mention of the side benefits of being part of such a group.
The larger problem is that people close to one - one's partner, parents, close friends - will all find out sooner or later; indeed attempting to hide it is probably even worse as it erodes trust.
You appear to assume that rationalists are selfish? Or that our "real selves" are exclusively sub-deliberative systems that can't multiply benefits to others?
How about the consideration that, out of all good futures that suffer from a tragedy-of-the-commons type problem, those that implement reward/punishment precommitments are more likely to overcome the free rider problem and actually work? Does this not push the probability up somewhat?
I see that utilitarian has already made this point:
Summary. In many cases, the good accomplished by money is approximately proportional to the amount donated, so that traditional arguments for being risk averse with respect to wealth don't apply. In such circumstances, utilitarians should take advantage of economic risk premia, such as those that accrue to riskier stocks. (For instance, in the context of the Capital Asset Pricing Model, "riskier" means "higher beta," i.e., higher scaled covariance with market returns.)
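For concreteness, here is the CAPM relationship that summary refers to, with purely illustrative rates rather than estimates of real market parameters:

```python
# The CAPM relationship referenced above: E[R_i] = R_f + beta_i * (E[R_m] - R_f).
# The rates are illustrative assumptions, not market estimates.
def capm_expected_return(risk_free, market_return, beta):
    """Expected return of an asset with the given beta under CAPM."""
    return risk_free + beta * (market_return - risk_free)

RISK_FREE = 0.03       # assumed risk-free rate
MARKET_RETURN = 0.08   # assumed expected market return

for beta in (0.5, 1.0, 1.5, 2.0):
    r = capm_expected_return(RISK_FREE, MARKET_RETURN, beta)
    print(f"beta = {beta}: expected return = {r:.1%}")
# Higher beta (greater covariance with the market) carries a higher expected
# return - the risk premium a roughly risk-neutral donor can harvest.
```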
Given my current position, I don't have a comparative advantage in reducing existential risk. So my strategy is (will be? akrasia...) to donate my time to the cause indirectly through money.
When it comes to my future day job, I'm pretty sure where to spend my marginal time: economic growth. Bringing growth to the non-Western world brings more potential minds to work on the problems, while increasing growth in general makes it cheaper, relatively speaking, not only to do the research that needs to be done, but also for people like me to donate to the cause.
I suggest this basic plan for anyone who doesn't have a comparative advantage in working on the problems involved in reducing existential risk: go to your own field and do research that enables work on these problems. In economics, I am suggesting that this is growth.
Also see rwallace's post - your babies can do the work you couldn't!
I would suggest volunteering for, or donating to, SIAI, the Lifeboat Foundation, and/or FHI. That's been my strategy for the last decade or so.
It is possible that embarking upon an exceptionally high-risk, high-reward moneymaking strategy, and precommitting most of the profit to a well-chosen collection of existential risk mitigation efforts (for example, earmarking stock options for SIAI and/or FHI), could increase your subjective probability that your high-risk strategy will succeed from close to zero to close to certainty, for anthropic reasons.
Specifically, if you are the only person in the world who is at all likely to donate $1 billion to existential risk mitigation, and the survival probability for planet Earth this century is 1% conditional on current meagre funding levels and 60% conditional on $1 billion in well-targeted, dedicated risk-mitigation funding, then, conditioning on your own continued existence, the probability of success for your company/investment can increase by a large factor, depending on various conditional probabilities.
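A toy Bayesian rendering of that argument; the two survival probabilities are the ones above, while the 1% prior on the venture succeeding is an assumption I've added just to show the mechanics:

```python
# A toy Bayesian rendering of the anthropic argument above. The two survival
# probabilities come from the text; the prior on the venture succeeding is an
# assumption added for illustration.
p_success = 0.01                 # assumed prior that the venture earns the $1B
p_survive_given_success = 0.60   # survival odds with $1B of targeted funding
p_survive_given_failure = 0.01   # survival odds at current funding levels

# P(venture succeeds | you are still around to observe anything):
posterior = (p_survive_given_success * p_success) / (
    p_survive_given_success * p_success
    + p_survive_given_failure * (1 - p_success)
)
print(f"P(success) rises from {p_success:.0%} to {posterior:.0%} after conditioning")
```

With these numbers, the posterior is about 38% - a large boost over the 1% prior, though nowhere near certainty; the exact factor depends heavily on the assumed prior and on how unique your potential donation really is.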
Why not condition directly on the successful outcome then? I'm fairly certain it's a confusion to take the above reasoning as an argument for decision-making.
I'm fairly certain it's a confusion to take the above reasoning as an argument for decision-making.
I think that there is some genuine confusion here, caused by our false naive ideas about forward-in-time continuity of human consciousness. Naively, we think that there is always a well-defined unique person at any future time that is "me", and that that "future me" defines what I will experience, so we think of an existential catastrophe event as causing the "me" post that event to be some kind of tortured, disembodied soul.
In reality, post-catastrophe, there is not a unique "me".
If we take a many-worlds stance on QM, then if there is a catastrophe with probability p that kills everyone, the post-catastrophe multiverse will contain branches that still contain me and branches that do not, in a measure ratio of (1-p):p. If p is close to 1, this means that most branches do not contain a "me". However, the surviving branches contain 10^LOTS copies of me, because even if 1-p is small, QM branches so much that (1-p)*(total number of branches) will still be a huge number.
But now we have an axiological decision to make: how are we to evaluate the goodness of the outcome? Intuitively, one wants to ask what I will experience, and optimize that. But there are two distinct ways we can formalize this intuition: one is to minimize the probability p of death, the other is to optimize quality of life in those branches that survive, irrespective of p.
Personally, I like the idea of pursuing a weighted strategy that assigns some importance to the number of survivors and some importance to the quality of life of each survivor; in my case, I think I place a premium on quality.
I've been talking to a variety of people about this recently, and it was suggested that people (including myself) might benefit from a LessWrong discussion on the topic. I've been thinking about it on my own for a year, which took me through Neuroscience, Computer Science, and International Security Policy. I'm hoping, and finding, that through discussion a much greater variety of options can be proposed and considered, and that those with particular experience or observations can share their knowledge so others benefit from it. I've been very happy to find there are already a number of people seriously working towards this (still far fewer than we might need), and their deliberations and lessons learned would be particularly valuable.
This is primarily about careers and other long-term, focused efforts (academic research and writing on the side, etc.), not smaller incremental tools such as motivation and akrasia discussions. Where you should be applying your efforts, not how (much). Unless there's a lot of interest, it might also be good to otherwise avoid discussions on self-improvement in general and how best to realize these long-term concerns, bringing those up elsewhere or in a separate post.
A few initial thoughts: