Larks comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM


Comment author: Larks 13 March 2012 09:58:50PM * 17 points

It would be helpful if you summarised the premises in a short list. At the moment one has to do a lot of scrolling.

Edit: Actually, I think it would be a very good idea; their not being written out together makes it easy to miss the fact that they're not all necessary, that some imply others, and that they basically don't cut reality at its joints. You assert that these are all necessary and logically separate premises. Yet P4 is clearly not necessary for FOOM - something not being the default outcome does not mean it will not happen. P3 implies P2, and P2 implies P1. And P5 is clearly not necessary either - FOOM could occur in a thousand years' time.

And again with the second set of premises - they are clearly not distinct, and not all necessary. For example,

  • P6 - SIAI will solve FAI

is not necessary; they might succeed by preventing anyone else from developing GAI.

  • P7 SIAI does not increase risks from AI.

If you mean net, then yes. But otherwise, it's perfectly possible that they might speed up UFAI as well as FAI, and yet still be a good thing, if the latter outweighs the former.

and

  • P9 It makes sense to support SIAI at this time

is the conclusion of the argument! This premise alone is sufficient - what are the others doing here? The only motivation I can see here is to make the conclusion seem artificially conjunctive.

Basically, as far as I can see you've chosen to split up factors so as to make the case seem more conjunctive, whilst ignoring those that make it disjunctive. Without any argument as to why the partition into overlapping, non-necessary factors you've given is the right one, this post reads more like rhetoric than analysis.

All this is quite apart from your actual arguments that these premises are unlikely.

Comment author: billswift 14 March 2012 12:11:05PM 2 points

That is actually my argument against a lot of philosophy: arguments embedded in a lot of prose are unnecessarily hard to follow. Arguments, at least ones that you actually expect to be capable of changing someone's mind, should be presented as clearly and schematically as possible. Otherwise it looks a lot like "baffle them with bullshit."