
antigonus comments on Writing feedback requested: activists should pursue a positive Singularity - Less Wrong Discussion

3 Post author: michaelcurzi 16 November 2011 09:14PM



Comment author: antigonus 16 November 2011 11:42:02PM 6 points

If I'm reading correctly, the argument you appear to present in your paper is:

  1. We (Thomas Pogge) want to end poverty.
  2. An AI could end poverty.
  3. Therefore, we should build an AI.

This isn't a strong argument. Pogge probably thinks that ending poverty is perfectly feasible without building an AI, so if you want to change his mind, you need to show that an AI solution can likely be implemented faster than a non-AI one, in addition to being sufficiently safe.

It seems like your paper merely sets out to establish that there might be some strong arguments in the vicinity for Singularity activism as a response to global poverty, without trying very hard to spell them out.

Comment author: michaelcurzi 17 November 2011 01:20:05AM 2 points

Thanks for the feedback - I appreciate it.

I was actually trying for a stronger claim: that AI (as a permanent solution that takes some time to develop) is better for ending poverty than institutional work or humanitarian aid (which has a lot of downsides). More generally, I want to show that AI dominates other strategies of moral action because of its tremendous scope, despite a) its uncertainty, b) its focus on future people, and c) its risks of bad consequences.

Your charge of vagueness is worth considering as well, though perhaps I'll just need to apply it to future writing. I'll get back to work. Thanks again.

Comment author: antigonus 17 November 2011 07:33:35AM 2 points

I guess I'm just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you're trying to lob the burden-of-proof tennis ball into Pogge's court: AI "might" turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we're uncharitable for 30 years), so it's Pogge's job to show that AI is probably not like that scenario.

Comment author: michaelcurzi 18 November 2011 12:10:10AM *  1 point

Right, I hear you. I deliberately avoid dealing with arguments about the likelihood of the Singularity itself, instead passing the reader off to treatments created specifically for that purpose, like Chalmers' paper and lukeprog's site.

If I can do one thing with the paper, I'd just like for Pogge to feel that he needs to address the possibility of the Singularity somehow, even if it's just by browsing singinst.org.

Thanks.

Comment author: Logos01 17 November 2011 12:17:23PM 0 points

I was actually trying for a stronger claim - that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid

Have you considered diminishing returns? We have more resources available to us than are currently useful in the goal of pursuing AGI. Would you argue that we should let those resources lie fallow, rather than use them to mitigate ongoing problems during the period before our AGI efforts succeed, merely because that's not as worthy a goal as AGI?

Comment author: wedrifid 17 November 2011 03:39:30AM 0 points

An AI could end poverty.

"Would" seems to be the word that is necessary there!