michaelcurzi comments on Writing feedback requested: activists should pursue a positive Singularity - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (11)
Thanks for the feedback - I appreciate it.
I was actually trying for a stronger claim - that AI (as a permanent solution that takes some time to develop) is better for ending poverty than institutional work or humanitarian aid (which have significant downsides). More generally, I want to show that AI dominates other strategies of moral action because of its tremendous scope, despite a) its uncertainty, b) its focus on future people, and c) its risks of bad consequences.
Your charge of vagueness is worth considering as well, though perhaps I'll just need to apply it to future writing. I'll get back to work. Thanks again.
I guess I'm just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you're trying to lob the burden-of-proof tennis ball into Pogge's court: AI "might" turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we're uncharitable for 30 years), so it's Pogge's job to show that AI is probably not like that scenario.
Right, I hear you. I definitely try to avoid dealing specifically with arguments about the likelihood of the Singularity - hopefully passing the reader off to treatments created specifically for that purpose, like Chalmers' paper and lukeprog's site.
If I can do one thing with the paper, I'd just like for Pogge to feel that he needs to address the possibility of the Singularity somehow, even if it's just by browsing singinst.org.
Thanks.
Have you considered diminishing returns? We have more resources available to us than are currently useful for the goal of pursuing AGI. Would you argue that we should let those resources lie fallow, rather than work to mitigate ongoing problems during the period before our AGI efforts succeed, merely because it's not as worthy a goal as AGI?