How can we ensure that a Friendly AI team will be sane enough?