All of Fai's Comments + Replies

Fai

but didn't seem to meaningfully engage with the core concerns of AI alignment.

Yes, not directly. We didn't include any discussion of AI alignment, or even anything futuristic-sounding, in order to keep the paper close to the average conversation in the field of AI ethics, and to cater to the tastes of our funders, two orgs at Princeton. We might write about these topics in the future, or we might not.

But I argue that the paper is relevant to AI alignment because of its core claims: AI will affect the lives of (many) animals, and these impacts matter ethically. ...

RobertM
Thanks for the detailed response!

This seems uncontroversial to me. I expect most people currently thinking about alignment would consider a "good outcome" to be one where the interests of all moral patients, not just humans, are represented, i.e. non-human animals (and potentially aliens). If you have other ideas in mind that you think have significant philosophical or technical implications for alignment, I'd be very interested in even a cursory writeup, especially if they're new to the field.

Yep, see above. I think total extinction is bad for animals compared to counterfactual future outcomes where their interests are represented by an aligned AI. I don't have a strong opinion on how it compares to the current state of affairs (but, purely on first-order considerations, it might be an improvement due to factory farming).

Agreed in principle, though I don't think S-risks are substantially likely.
Fai

Hi! I just wrote a full post in reply to this on the EA Forum (because my reply is long, and it has been a while since this post was published). I probably won't post the full reply on this forum, so here's a link to the EA Forum post.