Wei_Dai comments on Some Thoughts on Singularity Strategies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The problem is that building FAI is also likely not fast enough, given that UFAI looks significantly easier than FAI. And there are additional downsides unique to attempting to build FAI: since many humans are naturally competitive, an FAI project provides additional psychological motivation for others to build AGI; unless the would-be FAI builders have near-perfect secrecy and security, they will leak ideas and code to AGI builders not particularly concerned with Friendliness; the FAI builders may themselves accidentally build UFAI; and it's hard to do anti-AI PR/politics (to delay UFAI) while you're trying to build an AI yourself.
ETA: Also, the difficulty of building smarter humans seems logically independent of the difficulty of building UFAI, whereas the difficulty of building FAI is surely at least as great as the difficulty of building UFAI. So the likelihood that building smarter humans will be fast enough seems higher.
Smarter humans will see the difficulty gap between FAI and UFAI as smaller, so they'll be less motivated to "save time and effort" by not taking safety/Friendliness seriously. The danger of UFAI will also be more obvious to them.