Can we talk about changing the world? Or saving the world?
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.
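(To make the shape of that argument concrete, here is a toy expected-value sketch in Python; every number in it, the future population, the size of the risk reduction, the benchmark it is compared against, is an illustrative assumption of mine, not a figure anyone in this discussion has endorsed.)

```python
# Toy expected-value comparison. Every number here is an illustrative assumption.
future_lives = 1e16           # assumed number of future people if humanity survives
risk_reduction = 1e-4         # assumed absolute reduction in extinction probability
conventional_benchmark = 1e6  # assumed lives saved by a very effective conventional cause

ev_xrisk = risk_reduction * future_lives  # expected future lives preserved
print(f"Expected lives from reducing existential risk: {ev_xrisk:.1e}")        # 1.0e+12
print(f"Expected lives from the conventional cause:    {conventional_benchmark:.1e}")  # 1.0e+06
```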
One reason of course was the bar until yesterday on talking about artificial general intelligence; another is that many here state outright that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of argument about politics online that descends into a stale rehashing of talking points and point scoring.
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?
I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists, because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way, inequality of wealth is neither justice nor injustice. The consequentialist defence of the market recognises that, because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost we could in principle measure given a wealth/utility curve; it then argues that the total extra output resulting from this inequality more than pays for that cost.
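(As a sketch of what "measure it given a wealth/utility curve" could mean, here is a minimal calculation assuming a logarithmic utility curve; both the curve and the wealth figures are illustrative assumptions, not claims about the real distribution.)

```python
import math

# Sketch: utility cost of unequal wealth under an assumed log-utility curve.
wealth = [10_000, 50_000, 100_000, 1_000_000]  # hypothetical individual wealth levels

def utility(w):
    return math.log(w)  # assumed diminishing-marginal-utility curve

total = sum(wealth)
equal_share = total / len(wealth)

u_unequal = sum(utility(w) for w in wealth)
u_equal = utility(equal_share) * len(wealth)

# By concavity (Jensen's inequality), u_equal >= u_unequal: the same total wealth
# yields more utility when divided equally. The consequentialist case for the
# market is that the extra output outweighs this measurable cost.
print(f"Utility, unequal distribution: {u_unequal:.3f}")
print(f"Utility, equal distribution:   {u_equal:.3f}")
print(f"Utility cost of inequality:    {u_equal - u_unequal:.3f}")
```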
However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?
You clipped out "to within an order of magnitude". I stated that my best-guess probability for human extinction within a century was 10^(-6 +/- 4). That is a huge range - 9 orders of magnitude on the probability - yet it still means that I have over 80% confidence that the probability is under 10^-2. There is no contradiction here.
(It also means that, despite believing that extinction is probably one-in-a-million, I should treat it as more like one-in-a-thousand, because averaging over the meta-probability distribution naturally weights the high end. It would be a pity if this effect, of uncertainty inflating small probabilities, resulted in social feedback. When you hear me say "we should treat it as a .1% risk", I am implicitly stating that all models I can credit give a significantly lower risk. If your best model's risk-estimate is .01%, I am actually telling you that I think your model overestimates the risk.)
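(A minimal sketch of that averaging effect, under a simplifying assumption of my own: spread log10(p) uniformly over [-10, -2], a crude stand-in for whatever meta-distribution was actually intended.)

```python
import numpy as np

# Sketch: averaging over a meta-probability distribution inflates a small best guess.
# Assumption (mine): log10(p) uniform on [-10, -2], standing in for "10^(-6 +/- 4)".
samples = 10 ** np.random.uniform(-10, -2, size=1_000_000)

print(f"Median p: {np.median(samples):.1e}")  # ~1e-6, the best-guess figure
print(f"Mean p:   {samples.mean():.1e}")      # ~5e-4, roughly one-in-a-thousand
```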
So, where did you get those numbers from? 10^-6? 10^-2? Why not, say, 1-10^-6 instead? Gut feeling again, and that's inevitable. You either name a number, or you make decisions without the help of even this feeble model, choosing directly. Going on what they know, people on this site believe differently from you.
I have one of the lowest estimates: 30% for not killing off 90% of the population by 2100. Most of it comes from Unfriendly AI, with an estimate of 50% for AGI foom by 2070, or 70% by 2100 (expectation of relatively low-hanging fruit, it levels off as time go...)