In an AI-building project, wouldn't it make sense to build something that, instead of "maximizing expected utility", tries to "minimize expected disutility"?
The two will be mathematically equivalent when you're done, of course. But until then, wouldn't your buggy, incomplete alpha builds tend to be safer?
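To make the equivalence concrete, here's a minimal sketch, assuming "disutility" just means negated utility (the actions and numbers are made up): an agent that maximizes expected utility and one that minimizes expected disutility pick the same action.

```python
def expected_utility(action, outcomes):
    """Expected utility over (probability, utility) outcome pairs."""
    return sum(p * u for p, u in outcomes[action])

def expected_disutility(action, outcomes):
    """Expected disutility, defined here simply as negated utility."""
    return sum(p * -u for p, u in outcomes[action])

# Hypothetical toy world: two actions with probabilistic outcomes.
outcomes = {
    "A": [(0.5, 10), (0.5, -2)],
    "B": [(0.9, 3), (0.1, 1)],
}

maximizer_choice = max(outcomes, key=lambda a: expected_utility(a, outcomes))
minimizer_choice = min(outcomes, key=lambda a: expected_disutility(a, outcomes))
assert maximizer_choice == minimizer_choice  # identical behaviour once finished
```

The question is whether the two framings still differ in practice before the build is finished, i.e. while the code is buggy and incomplete.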
What do you mean by the word "disutility"?
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people admitting ignorance, and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.