jmmcd comments on Three Approaches to "Friendliness" - Less Wrong

14 Post author: Wei_Dai 17 July 2013 07:46AM




Comment author: jmmcd 17 July 2013 03:30:45PM 0 points

OK, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps taken against them is relevant. To take the easiest example: would postponing the "take over the universe" step for 300 years make a big difference to the expected amount of cosmic commons burned before takeover?

Comment author: Baughn 17 July 2013 05:30:46PM 1 point

Depends. Would that delay allow someone else, outside the AI's defined sphere of influence, to build an AI that doesn't wait?

If the AI isn't taking over the universe, that leaves open the possibility that something else will. If it doesn't control humanity, chances are that something else will be another human-originated AI. And if it does control humanity, why are we waiting at all?