jsalvatier comments on Holden Karnofsky's Singularity Institute Objection 2 - Less Wrong

Post author: ciphergoth, 11 May 2012 07:18AM


Comment author: jsalvatier 11 May 2012 06:07:55PM, 7 points

At least in the original post, I don't think Holden's point is that tool-AI is much easier to build than agent-AI (though he seems to have the intuition that it is), but that it's potentially much safer (largely because of increased feedback), and thus that it deserves more investigation (and that it's a bad sign about SIAI that it has neglected this approach).

Comment author: GuySrinivasan 11 May 2012 06:30:34PM, 2 points

Yes, good point. The objection is that SI has not addressed tool-AI, yet much of the discussion here is about addressing tool-AI rather than the meta-question "why isn't this explicitly called out by SI?" In particular, the intuitions Holden offers in response to those questions (that we may well be able to create extremely useful general AI without creating general AI that can improve itself) do seem to have received too little in-depth discussion here. We've often mentioned the possibility, and often decided to skip the question because it's very hard to think about, but I don't recall many really lucid conversations trying to ferret out what it would look like if more-than-narrow, less-than-able-to-self-improve AI were a large enough target to hit reliably.

Comment author: jsalvatier 11 May 2012 06:36:53PM, 2 points

(As an aside, I think that as Holden conceives of it, tool-AI could self-improve, but because it's tool-like rather than agent-like, it would not self-improve automatically. Its outputs could take the form "I would decide to rewrite my program with code X", but humans would need to actually implement those changes.)