ChristianKl comments on Stupid Questions June 2015 - Less Wrong
In an AI-building project, wouldn't it make sense to build something that, instead of "maximizing expected utility", tries to "minimize expected disutility"?
The two will be mathematically equivalent when you're done, of course. But until then, wouldn't your buggy, incomplete alpha builds tend to be safer?
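(A minimal sketch of the equivalence claim, assuming "disutility" simply means negated utility, which is exactly the definition questioned below: define D(a) = -U(a) for each action a. Then by linearity of expectation E[D(a)] = E[-U(a)] = -E[U(a)], so the optimizers coincide:

\arg\max_a \mathbb{E}[U(a)] = \arg\min_a \mathbb{E}[-U(a)] = \arg\min_a \mathbb{E}[D(a)].

Under that definition the two objectives pick out the same actions, and the interesting question is whether a buggy approximation of one fails more gracefully than a buggy approximation of the other.)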
What do you mean by the word "disutility"?
You might want to read the discussion chaosmage and I have been having on exactly that point. (I haven't yet got an answer that's clear to me.)
I've now tried to answer it.