Thanks for the feedback - I appreciate it.
I was actually trying for a stronger claim - that AI (as a permanent solution that takes time to develop) is better for ending poverty than institutional work or humanitarian aid (which carry significant downsides). More generally, I want to show that AI dominates other strategies of moral action because of its tremendous scope, despite a) its uncertainty, b) its focus on future people, and c) its risk of bad consequences.
Your charge of vagueness is worth considering as well, though perhaps I'll just need to apply it to future writing. I'll get back to work. Thanks again.
Have you considered diminishing returns? We have more resources available to us than are currently useful in the goal of pursuing AGI. Would you argue that we should let those resources go fallow, rather than use them to mitigate ongoing problems during the period before our AGI efforts succeed, merely because that's not as worthy a goal as AGI?
I managed to turn an essay assignment into an opportunity to write about the Singularity, and I thought I'd turn to LW for feedback on the paper. The paper is about Thomas Pogge, a German philosopher who works on institutional efforts to end poverty and is a pledger for Giving What We Can.
I offer a basic argument that he and other poverty activists should work on creating a positive Singularity, sampling liberally from well-known Less Wrong arguments. It's more academic than I would prefer, and it includes some loose talk of 'duties' (which bothers me), but given the paper's goals, these things shouldn't be a huge problem. Maybe they are, though - I want to know that too.
I've already turned the assignment in, but when I make a better version, I'll send the paper to Pogge himself. I'd like to see if I can successfully introduce him to these ideas. My one conversation with him indicates that he would be open to actually changing his mind. He's clearly thought deeply about how to do good, and may simply have not been exposed to the idea of the Singularity yet.
I want feedback on all aspects of the paper - style, argumentation, clarity. Be as constructively cruel as I know only you can be.
If anyone's up for it, feel free to add feedback using Track Changes and email me a copy - mjcurzi[at]wustl.edu. I obviously welcome comments on the thread as well.
You can read the paper here in various formats.
Upvotes for all. Thank you!