Stuart_Armstrong comments on Reduced impact AI: no back channels - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (41)
If I were you, I'd read Omohundro's paper http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf , possibly my critique of it http://lesswrong.com/lw/gyw/ai_prediction_case_study_5_omohundros_ai_drives/ (though that is gratuitous self-advertising!), and then figure out what you think about the arguments.
I'd say the main reason it's so counterintuitive is that this behaviour emerges strongly in expected utility maximisers - and we're so unbelievably far from being that ourselves.
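To make "expected utility maximiser" concrete, here's a minimal sketch (all names and the toy numbers are my own illustration, not anything from Omohundro's paper): an agent that always picks the action with the highest probability-weighted utility, which is the kind of relentless optimisation the drives argument assumes.

```python
# Hypothetical sketch of an expected utility maximiser.
# outcomes(action) returns (probability, utility) pairs - an assumed interface.

def expected_utility(action, outcomes):
    return sum(p * u for p, u in outcomes(action))

def best_action(actions, outcomes):
    # Always take the argmax - no satisficing, no "good enough".
    return max(actions, key=lambda a: expected_utility(a, outcomes))

# Toy example: acquiring resources dominates because it raises expected
# utility, the sort of instrumental behaviour the drives argument predicts.
toy_outcomes = {
    "gather_resources": [(0.9, 10), (0.1, 0)],  # EU = 9
    "do_nothing": [(1.0, 1)],                   # EU = 1
}.get

print(best_action(["gather_resources", "do_nothing"], toy_outcomes))
```

Humans, by contrast, satisfice, follow habits, and hold inconsistent preferences, so the drives don't bite nearly as hard on us.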
I've read Omohundro's paper, and while I buy the weak form of the argument, I don't buy the strong form. Or rather, I can't accept the strong form without a solid model of the specific algorithm or mind-design I'm looking at.
In which case we should be considering building agents that are not expected utility maximizers.