I'm not sure what points you'll end up making (wrt point #2), but I just want to state that this approach makes sense to me. It's often productive to think through simple cases, or cases where our data are better, and then try to think explicitly about whether the lessons learned there should carry over to other cases that are in some ways more difficult to analyze.
In philosophy, a common saying is "It's best not to try to do philosophy all at once." Very much in the spirit of this comment and Jonah's point #3.
I'm not sure what points you'll end up making (wrt point #2)
Some of the points I had in mind are in my prior posts. But more to follow :-)
I tend to think that anything that increases the standard of living and health of people in the third world probably does reduce some forms of existential risk, by reducing the risk of war and by reducing the number of people who would consider extreme forms of terrorism involving weapons of mass destruction, including weapons, like genetically engineered bioweapons, that may themselves pose an existential risk. People who see their health improving and their standard of living rising are much less likely to resort to such extreme measures.
Terrorists don't tend to be very poor.
I think the contribution that lessening poverty would make to reducing existential risk is that there are presumably some very talented people (or potentially talented people, who will only become so if they get enough food when young) who are blocked by poverty from working on existential risk reduction.
Well, it's not just about poverty per se. It's really about the question of "is my society getting better" vs "is my society getting worse". People who think that everything is getting better and that the future is looking like it's going to be better than the past tend to go for ideas like "progress", "incremental change", etc. On the other hand, if it looks like your society is getting worse around you, you are more likely to be drawn to desperate measures.
Terrorist attacks or bloody revolutions usually happen when people have a sense that things should be getting better, but they're getting worse instead.
The real goal here is to try to reduce the existential risk caused by the fact that it's going to become easier and easier to make potentially extinction-causing weapons. Today it takes a full superpower to create enough nuclear weapons to pose an existential risk, but in the near future a small nation should be able to create weapons that are even more dangerous, and not long after that, an even smaller group will be able to. If most of the world is made up of peaceful, prosperous democracies by the time we get to that point, then I think our odds of avoiding catastrophe are much higher.
My recent posts Robustness of Cost-Effectiveness and Effective Philanthropy and Earning to Give vs. Altruistic Career Choice Revisited concern optimal philanthropy, and I'll be writing more on the topic in the near future.
My use of examples from prosaic domains such as global health has given rise to some confusion, because some members of the Less Wrong community believe that existential risk reduction is by far the best target for optimal philanthropy, and also believe that effective philanthropy in the context of global health is very disanalogous to effective philanthropy in the context of x-risk reduction. For example, Eliezer wrote:
I believe that studying the issues surrounding philanthropic opportunities in areas such as global health is in fact helpful for better understanding how to assess x-risk reduction opportunities. My reasons for thinking this don’t fit into a few sentences, and fully understanding them requires understanding some of my thoughts about more prosaic domains. So I’ll respond to Eliezer’s comment at a later date.
For now, I’ll just remark:
Note: I formerly worked as a research analyst at GiveWell. All views expressed are my own.