billswift comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Fairly low. But that's because I don't think the first AIs are likely to be built by people trying to guarantee Friendliness. If a Friendly AI proponent rushes to finish before another team can, that could be a much bigger risk.
OK.
For my part, if I think about things people might do that might cause a powerful AI to feel threatened, with significantly bad results, FAI theory and implementation not only doesn't float to the top of the list, it's hardly even visible in the hypothesis space (unless, as here, I privilege it inordinately by artificially priming it).