It's not obvious that the best way to reduce existential risk is to actually work on the problem. Imagine if every farmer put down his plow and came to the university to study artificial intelligence research. Everyone would starve. It may well be that someone's best contribution is to continue to write software to do billing for health insurance, because that helps keep society running, which causes increased wealth, which then funds and supports people who specialize in researching risks among other fields.
I suspect that, actually, only a small percentage of people, even of people here, could usefully learn the political truths relevant to existential risk mitigation via the kind of discussion you are proposing. Very few people are in a position to cause political change. The marginal utility gain for the average person of learning the truth on a political matter is practically zero, given his lack of influence on the political process. The many arguments against voting apply to seeking political truth as well, and even more so, because ascertaining political truths is harder than voting.
Most interest in politics is IMO similar to interest in sports or movies. It...
The set of people seriously working to reduce existential risks is very small (perhaps a few hundred, depending on who and how you count). This gives strong general reason to suppose that the marginal impact of an individual can be large, in cases where the individual aims to reduce existential risks directly and is strategic/sane/rational about how (and not in cases where the individual simply goes about their business as one of billions in the larger economy).
Many LW readers are capable of understanding that there are risks, thinking through the differential impact their donations would have on different kinds of risk mitigation, and donating money in a manner that would help. Fewer, but still many, are also capable of improving the quality of thought regarding existential risks in relevant communities (e.g., in the academic departments where they study or work, or on LW or other portions of the blogosphere). And while I agree with Hal's point that most politics is used as entertainment, there is reason to suppose that improving the quality of discussion of a very-high-impact, under-researched, tiny-numbers-of-people-currently-involved topic like existential risks can improve both (a) the well-directedness of resources like mine that are already being put toward existential risks, and (b) the amount of such resources, in dollars and in brainpower.
I agree with ciphergoth that we would probably have an easier time discussing political issues than some other communities, and I agree with HalFinney that it's probably not a very good use of our time anyway. Let's say that everyone on LessWrong agrees on a solution to some political problem. So what? We already have lots of good ideas no one will listen to. It doesn't take a long-time reader of Overcoming Bias to realize marijuana criminalization isn't working so well, but so far the efforts of groups with far more resources than ourselves have been most...
"We already have lots of good ideas no one will listen to."
This is my primary thought on all such sentiments. The best thing for people here to do would probably be to stop worrying about altruism and start trying to get rich. Once you're rich, your altruism will actually mean something.
Most of you are rich by historical standards, and by the standards of the world. So think carefully about just how "rich" will be "enough" to "actually mean something."
What traps can we defuse in advance?
We care about saving the world and we care about the truth, so sometimes we start caring too much about the ideas that we think represent those things. How can we foster detachment? How can we encourage people to consider an idea even if they don't like it, and then to relinquish an idea once it has been fairly considered and rejected?
The following paradigm has worked for me:
It's natural to be afraid of considering an idea that we know is false. Thus it is useful to occasionally practice considering id...
Survey: are you motivated to improve or save the world?
This survey aims to determine whether there is significant consensus or disparity. It is in response to the datapoint presented here.
If you would like to qualify or explain your response, feel free to do so as a comment to the appropriate response.
Note that this is a general solution to the problem of conducting a quick, off-the-cuff survey on LW without affecting karma, though you need to be able to view negatively scored comments.
If you want to leave me with positive karma, please keep the survey neutral a
Here's Wikipedia's list of Forbidden Words, which I think has some good examples of how language can be subtly loaded on controversial / emotionally charged issues. Diligently watching out for that sort of thing is probably one of the best things we could do to avoid political discussions degenerating.
I think it will be essential to frame carefully what we might wish to accomplish as a group, and what we would not. I say this because I'm one of those who think that humanity has less than a 50% chance of surviving the next 100 years, yet I have no interest in trying to avert this. I am very much in favour of humanity evolving into something far more rational than it is now, and I don't really see how one can justify saying that such a race would still be 'humanity'. On the other hand, if the worry is the extinction of all rational th...
If politicians start following expected utility consequentialism, special interest groups will be able to exploit the system by manufacturing in themselves "offense" (extreme emotional disutility) at unfavored measures, forcing your maximizer to give in to their demands. To avoid this, you need a procedure for distinguishing "warranted" offense from "unwarranted" offense: some baseline of personal rights ultimately derived from something other than self-assessed emotional utility.
If you see a way around this difficulty, let m...
Come on, Ciphergoth, the problem of saving humanity would be too easy if you could convince a large number of humans to go along with your proposals! You have a harder challenge: save humanity in spite of the apathy, and in many cases intransigent opposition, of the humans.
I have a hard time believing that anyone in power is serious about saving humanity. There are so many obvious and easy things that could be done, that would clearly be enormously helpful, that no one with power is doing or even suggesting. Politics is almost entirely a signalling game.
A...
I will admit to an estimate higher than 95% that humanity or its uploads will survive the next hundred years. Many of the "apocalyptic" scenarios people are concerned about seem unlikely to wipe out all of humanity; so long as we have a breeding population, we can recover.
My impression is that the material covered on OB/LW is more than sufficient to allow people that really understand the material to talk politics without exploding. I don't think we need any politics specific tricks for those that are likely to be helpful contributors.
This came up in the Santa Barbara LW meetup, and I felt like that group could have talked politics the right way. The implicit consensus seemed to be "Yeah, it'd probably work", though we didn't try.
Of course, with a smaller group and stronger selection pressures it is less likely to...
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.
And you say that like it is a bad thing! The possibility of creating just such a utopia sounds like a damn good motivating influence for concerted altruistic effort and existential risk mitigation to me!
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it.
For the record I would put it at levels overwhelmingly higher than 95%. More like 99.999%.
One observation and a related suggestion:
(1) We've gone off-topic regarding the demands of this post. Ciphergoth asks what traps we can defuse in advance, before we start to talk about specific ideas to do with what one does in order to change the world. However, I'm neutral about not following instructions -- perhaps Ciphergoth hasn't asked the right question after all, and we need to triangulate towards the right question.
(2) I've got no idea how to begin answering some of the other problems that are being posed. (E.g., how can we best help the world?) S...
Richard Posner on the economics of the flu epidemic:
We need an overall "catastrophe budget" that would match expenditures to the net expected benefits of particular measures targeted at particular catastrophic threats.
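Posner's "catastrophe budget" amounts to ranking candidate measures by expected benefit relative to cost and funding down the list until the budget is exhausted. A toy sketch of that procedure follows; every measure, cost, probability, and harm figure here is invented purely for illustration, and the greedy ratio-based selection is a standard approximation, not a claim about how Posner would compute it:

```python
# Toy catastrophe-budget allocator. Expected benefit of a measure is
# (reduction in catastrophe probability) x (harm if the catastrophe occurs).
# Fund measures greedily by benefit-to-cost ratio, within a fixed budget.
# All figures below are invented for illustration.
measures = [
    # (name, cost, probability reduction, harm if catastrophe occurs)
    ("asteroid survey",     1.0, 1e-6, 1e9),
    ("pandemic stockpile",  5.0, 1e-4, 1e8),
    ("reactor inspections", 2.0, 1e-5, 1e8),
]

def benefit_cost_ratio(m):
    _, cost, dp, harm = m
    return (dp * harm) / cost

budget = 7.0
funded, spent = [], 0.0
for name, cost, dp, harm in sorted(measures, key=benefit_cost_ratio, reverse=True):
    # Fund only measures whose expected benefit exceeds their cost
    # and that still fit in the remaining budget.
    if dp * harm > cost and spent + cost <= budget:
        funded.append(name)
        spent += cost

print(funded)  # → ['pandemic stockpile', 'asteroid survey']
```

The point of the sketch is only that the decision rule is mechanical once the expected-harm estimates exist; the hard part, of course, is producing defensible probabilities for catastrophes that have never happened.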
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening.
Sounds like good work if you can get it. ;-)
More seriously, though, if you can't handle the getting out of bed part, it seems like taking on much bigger tasks might be off the agenda. And if more people were getting laid in the evening, we might have less violent conflict in the world.
But I'm de...
So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make.
This sentence smuggles in the assumption that we are in a position to reduce existential risk.
Two big risks are global warming and nuclear war.
The projections for large changes in climate depend on continuing growth in wealth and population in order to get the high levels of carbon dioxide emission needed to create the change. If it really goes horribly w...
Can we talk about changing the world? Or saving the world?
I think few here would give an estimate higher than 95% for the probability that humanity will survive the next 100 years; plenty might put a figure less than 50% on it. So if you place any non-negligible value on future generations whose existence is threatened, reducing existential risk has to be the best possible contribution to humanity you are in a position to make. Given that existential risk is also one of the major themes of Overcoming Bias and of Eliezer's work, it's striking that we don't talk about it more here.
One reason, of course, was the bar until yesterday on talking about artificial general intelligence; another is the many who state in plain terms that they are not concerned about their contribution to humanity. But I think a third is that many of the things we might do to address existential risk, or other issues of concern to all humanity, get us into politics, and we've all had too much of a certain kind of online argument about politics that descends into stale rehashing of talking points and point-scoring.
If we here can't do better than that, then this whole rationality discussion we've been having comes to no more than how we can best get out of bed in the morning, solve a puzzle set by a powerful superintelligence in the afternoon, and get laid in the evening. How can we use what we discuss here to be able to talk about politics without spiralling down the plughole?
I think it will help in several ways that we are largely a community of materialists and expected utility consequentialists. For a start, we are freed from the concept of "deserving" that dogs political arguments on inequality, on human rights, on criminal sentencing and so many other issues; while I can imagine a consequentialism that valued the "deserving" more than the "undeserving", I don't get the impression that's a popular position among materialists, because of the Phineas Gage problem. We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today's unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
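The diminishing-marginal-utility argument can be made concrete with a toy model. Here log utility stands in for a generic concave wealth/utility curve, and the population and wealth figures are invented purely for illustration:

```python
import math

def total_utility(wealths):
    """Sum of log utility over individuals. Log is one standard
    diminishing-marginal-utility curve; the choice is illustrative."""
    return sum(math.log(w) for w in wealths)

# The same total wealth (100), divided equally vs unequally.
u_equal   = total_utility([50, 50])
u_unequal = total_utility([90, 10])

# With a concave curve, the equal split yields more total utility
# from the same total wealth: that gap is the "cost in utility"
# of the unequal distribution.
assert u_equal > u_unequal

# The consequentialist defence of the market then has to show that
# inequality produces enough *extra* output to cover that cost. In
# this toy model, the unequal society's total wealth would need to be
# scaled up by this factor just to break even:
breakeven = math.exp((u_equal - u_unequal) / 2)
print(round(breakeven, 3))  # → 1.667
```

That is, under these made-up numbers the unequal arrangement would have to generate roughly two-thirds more wealth before the extra output "more than pays for" the inequality; the real argument turns entirely on what the actual wealth/utility curve and incentive effects look like.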
However, I'm more confident of the need to talk about this question than I am of my own answers. There's very little we can do about existential risk that doesn't have to do with changing the decisions made by public servants, businesses, and/or large numbers of people, and all of these activities get us straight into the world of politics, as well as the world of going out and changing minds. There has to be a way for rationalists to talk about it and actually make a difference. Before we start to talk about specific ideas to do with what one does in order to change or save the world, what traps can we defuse in advance?