Trapping AIs via utility indifference — LessWrong