Predicting the future is hard, so it’s no surprise that we occasionally miss important developments. However, several times recently, in the contexts of Covid forecasting and AI progress, I noticed that I had missed some crucial feature of a development I cared about getting right, and it felt to me...
I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and...
I recently finished a 9-post sequence on moral anti-realism over on the Effective Altruism Forum. This introduction explains my goals in writing the sequence and summarizes its main insights. A little further down, I will comment on which posts are most worth reading for readers with particular interests since I...
This post is a half-baked idea that I'm sharing here to get feedback and further brainstorming. There seem to be some interesting parallels between epistemology and ethics. Part 1: Moral Anti-Epistemology. "Anti-epistemology" refers to bad rules of reasoning that exist not because they are useful/truth-tracking, but because they...
There have been some posts about animals lately, for instance here and here. While normative assumptions about the treatment of nonhumans played an important role in the articles and were debated at length in the comment sections, I was missing a concise summary of these arguments. This post from over...