lukeprog comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Clearly, this is possible. If an FAI team comes to believe this during development, I hope they'll reconsider their plans. But can you provide, or link me to, some reasons for suspecting that p(eAI | attempt toward FAI) > p(eAI | attempt toward AGI)?
Some relevant posts/comments: