MattMahoney comments on Breakdown of existential risks - Less Wrong
Why does superintelligence require global coordination? Apparently all one needs to do is to develop an FAI, and the rest will take care of itself.
Because FAI is a hard problem. If it were easy, we would not still be paying people $70 trillion per year worldwide to do work that machines aren't yet smart enough to do.
Almost all of these risks are hard problems; difficulty alone seems an insufficient explanation.