Wei_Dai comments on The AI design space near the FAI [draft] - Less Wrong

Post author: Dmytry 18 March 2012 10:29AM




Comment author: Will_Newsome 28 March 2012 11:36:49PM 4 points

> SingInst is still strongly associated with wanting to directly build FAI. It's a bad idea according to my best guess, and I want to avoid giving the impression that I support the idea.

I think this is a serious concern, especially as I'm starting to suspect that AGI might require decision-theoretic insights about reflection in order to be truly dangerous. If my suspicion is wrong, then SingInst working directly on FAI isn't that harmful, marginally speaking; but if it's right, then SingInst's support of decision theory research might make it one of the most dangerous institutions around.

Given that you're worried and that you're highly respected in the community, this would seem to be one of those "halt, melt, and catch fire" situations that Eliezer talks about, so I'm confused by SingInst's apparently somewhat cavalier attitude. They seem intent on laying the groundwork for the ennead.

Comment author: Wei_Dai 29 March 2012 03:15:49PM 2 points

> I'm starting to suspect that AGI might require decision-theoretic insights about reflection in order to be truly dangerous

Another way in which decision-theoretic insights may be harmful is if they increase the sophistication of UFAIs and allow them to control less sophisticated AGIs in other universes.

> They seem intent on laying the groundwork for the ennead.

I'm trying to avoid being too confrontational: that might backfire, and I might be wrong myself. It seems safer to just push them to be more strategic, so that they either see the danger themselves or explain why it's a good idea despite the dangers.