billswift comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong

8 Post author: ciphergoth 11 May 2012 07:16AM



Comment author: billswift 11 May 2012 01:22:25PM 0 points

Fairly low. But that's because I don't think the first AIs are likely to be built by people trying to guarantee Friendliness. If a Friendly AI proponent rushes to finish before another team can, it could be a much bigger risk.

Comment author: TheOtherDave 11 May 2012 01:30:52PM 1 point

OK.

For my part, when I think about things people might do that could cause a powerful AI to feel threatened, with significantly bad results, FAI theory and implementation not only doesn't float to the top of the list, it's hardly even visible in the hypothesis space (unless, as here, I privilege it inordinately by artificially priming it).