timtyler comments on Some Thoughts on Singularity Strategies - Less Wrong

26 Post author: Wei_Dai 13 July 2011 02:41AM


Comment author: timtyler 13 July 2011 08:31:05PM 1 point

My own feeling is that the chance of successfully building FAI, assuming the current human intelligence distribution, is low (even given unlimited financial resources), while the risk of unintentionally building or contributing to UFAI is high. I think I can explicate part of my intuition this way: there must be a minimum level of intelligence below which the chance of successfully building an FAI is negligible. We humans seem at best just barely smart enough to build a superintelligent UFAI. Wouldn't it be surprising if the intelligence thresholds for building UFAI and FAI turned out to be the same?

What will construct advanced intelligent machines is slightly less advanced intelligent machines, in a symbiotic relationship with humans. It doesn't much matter whether the humans are genetically identical to the ones that barely managed to make flint axe heads, since they are not working on this task alone.