Wei_Dai comments on The AI design space near the FAI [draft] - Less Wrong

Post author: Dmytry 18 March 2012 10:29AM




Comment author: Wei_Dai 29 March 2012 03:15:49PM 2 points

I'm starting to suspect that AGI might require decision theoretic insights about reflection in order to be truly dangerous

Another way in which decision-theoretic insights may be harmful is if they increase the sophistication of a UFAI and allow it to control less sophisticated AGIs in other universes.

They seem to be intent on laying the groundwork for the ennead.

I'm trying to avoid being too confrontational, since that might backfire, and I might be wrong myself. It seems safer to just push them to be more strategic, so that they either see the danger themselves or explain why it's a good idea despite the dangers.