John Baez's This Week's Finds (Week 311) [Part 1; added for convenience following Nancy Lebovitz's comment]
John Baez's This Week's Finds (Week 312)
John Baez's This Week's Finds (Week 313)
I really like Eliezer's response to John Baez's last question in Week 313 about environmentalism vs. AI risks. I think it satisfactorily deflects much of the concern that I had when I wrote The Importance of Self-Doubt.
Eliezer says:
Anyway: In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves.
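(To make the dominance argument concrete with a toy calculation: suppose a flourishing future is worth $10^9$ utils, extinction $0$, and the gap between 95% and 80% species survival $10^2$ utils. These numbers are my own illustrative assumptions, not Eliezer's. Then

$$\Delta EU_{\text{x-risk}} = 10^{-4} \times (10^{9} - 0) = 10^{5}, \qquad \Delta EU_{\text{species}} = 1 \times 10^{2} = 10^{2},$$

so even a one-in-ten-thousand shift in extinction probability outweighs a guaranteed improvement in species survival by a factor of a thousand. The comparison comes out this way whenever the first utility gap is sufficiently larger than the second.)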
This is true as stated, but it ignores an important issue: there is feedback between more mundane current events and the eventual potential extinction of the human race. For example, the United States' involvement in Libya has a (small) influence on existential risk (I don't have an opinion as to what sort). Any impact of global warming on human society likewise has some influence on existential risk.
Eliezer's points about comparative advantage, and about existential risk in principle dominating all other considerations, are valid, important, and well-made, but passing from principle to practice is very murky in the complex human world that we live in.
Note also the points that I make in Friendly AI Research and Taskification.
I find the organization of the sequences difficult and frustrating; it's hard to go through them in an organized manner. This has left me tempted to wait for Eliezer's book, but I don't know whether that's a long way off or relatively near, or whether it will contain everything of value in the sequences or follow a narrower theme.
However, I have gone through enough of the sequences that I can usually follow along on new posts without too much trouble. The great part is that whenever someone makes an error that a post from the sequences corrects, a senior community member quickly links to it.
ETA: This actually leads me to an idea: perhaps we could identify the most important posts on LW by counting how many times they get linked in other discussions. A rough sketch of what that count could look like follows.
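For concreteness, here is a minimal sketch, assuming the comment bodies have already been collected somehow; the corpus, the URL pattern, and all names below are my own hypothetical choices, not an existing LW API:

```python
import re
from collections import Counter

# Matches links to LW posts of the (current) form
# http://lesswrong.com/lw/<post id>/<slug>/ and captures the post id.
POST_LINK = re.compile(r'https?://(?:www\.)?lesswrong\.com/lw/([0-9a-z]+)/')

def count_post_links(comment_bodies):
    """Count how many comments link to each LW post.

    `comment_bodies` is a hypothetical corpus: an iterable of comment
    HTML/markdown strings, obtained however one likes (site dump, crawler).
    """
    counts = Counter()
    for body in comment_bodies:
        # Deduplicate within a comment so each comment counts a post once.
        counts.update(set(POST_LINK.findall(body)))
    return counts

# Usage: show the ten most-linked posts by id.
# for post_id, n in count_post_links(comments).most_common(10):
#     print(n, 'http://lesswrong.com/lw/' + post_id + '/')
```

A real version would also need to handle relative links and redirects, but even a crude count like this might surface the most-cited posts.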