Eugine_Nier comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong, 15 May 2012 10:23AM




Comment author: Wei_Dai, 16 May 2012 05:13:47PM, 3 points

If there's convergence in goals, then we don't have to worry about building an AI with the wrong goals. If there's only convergence in behavior, then we do, because building an AI with the wrong goals will shift the convergent behavior in the wrong direction. So I think it makes sense for Stuart's paper to set aside acausal trading and talk only about whether there is convergence in goals.

Comment author: Eugine_Nier, 17 May 2012 05:09:57AM, 3 points

Not necessarily; the AI might destroy the Earth before its goals converge.