shminux comments on Breakdown of existential risks - Less Wrong

Post author: Stuart_Armstrong 23 November 2012 02:12PM 16 points


Comment author: shminux 23 November 2012 03:51:12PM 2 points

Why does superintelligence require global coordination? Apparently all one needs to do is to develop an FAI, and the rest will take care of itself.

Comment author: Kaj_Sotala 23 November 2012 04:11:25PM 5 points

For example, AI regulation (like most technology regulation) is only effective if you get the whole world on board, and without global coordination there's the potential for arms races.

"Only develop an FAI" also presumes a hard takeoff, and it's not exactly established beyond all doubt that we'll have one.

Comment author: Stuart_Armstrong 23 November 2012 07:59:01PM 3 points

Preventing UFAI, dealing safely with Oracles, or using reduced-impact AIs requires global coordination. Only the "FAI in a basement" approach doesn't.

Comment author: MattMahoney 23 November 2012 06:26:17PM -1 points

Because FAI is a hard problem. If it were easy, then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.

Comment author: JoshuaZ 23 November 2012 06:45:40PM 1 point

> Because FAI is a hard problem.

Almost all of these are hard problems. That alone seems insufficient as an explanation.