private_messaging comments on Brainstorming additional AI risk reduction ideas - Less Wrong

12 Post author: John_Maxwell_IV 14 June 2012 07:55AM


Comments (37)


Comment author: private_messaging 15 June 2012 08:17:40AM * 0 points

For example: research into (a) how to make an AI relate its computational structure to its substrate (AIXI does not, and fails to self-preserve); (b) how to prevent wireheading in an AI that does relate its computational structure to its substrate; and (c) how to define real-world goals for an AI to pursue (currently, AIs are just mathematics that makes abstract variables satisfy abstract properties; those properties may be described in real-world terms in a paper's annotations, but implement no correspondence to the real world).
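The wireheading problem in (b) can be illustrated with a toy sketch (not from the comment; the actions, state, and reward function here are all hypothetical): an agent that picks the action maximizing its reward *signal* will, if the environment exposes the reward register itself as an action target, overwrite the register instead of pursuing the intended goal.

```python
def best_action(actions, reward_of):
    """Pick the action with the highest predicted reward signal."""
    return max(actions, key=reward_of)

# Intended task: move toward a goal position on a line.
state = {"pos": 0, "goal": 5}

def reward_of(action):
    # Intended reward: negative distance to the goal.
    if action == "step_right":
        return -abs(state["goal"] - (state["pos"] + 1))
    # Wireheading action: write a huge value directly into the reward register.
    if action == "set_register_high":
        return 1e9
    return -abs(state["goal"] - state["pos"])

print(best_action(["step_right", "set_register_high"], reward_of))
```

Because the agent maximizes the signal rather than the state of the world, the argmax chooses `set_register_high`; preventing this for an agent that models its own reward machinery is exactly the open problem the comment names.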

Such research is clearly dangerous, and also unnecessary for creating practically useful AIs, so it is not done at large. Perhaps it is only done by SI, in which case persuading grantmaking organizations not to give any money to SI may do the trick.