Continuing the experiment from August, let's try another open thread for AI Alignment discussion. The goal is to be a place where researchers and upcoming researchers can ask small questions they are confused about, share early-stage ideas, and have lower-key discussions.
Hmm, I think I would make the further claim that in this world regular engineering practices are likely to work well, because that is what they usually do.
(If a single failure meant that we lose, then I wouldn't say this; so perhaps we also need to add another claim: that the first failure does not mean automatic loss. Regular engineering practices get you to high degrees of reliability, not perfect reliability.)
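To make that last distinction concrete, here's a quick back-of-the-envelope sketch (the per-trial failure rate and trial count below are made-up numbers, purely for illustration, not estimates of anything real): a system that fails only one time in a million still becomes near-certain to fail at least once given enough independent trials, which is why "high reliability" only helps if a single failure isn't automatic loss.

```python
# Illustrative only: both numbers below are assumptions for the sake of the example.
p_failure = 1e-6       # assumed per-trial failure probability ("highly reliable")
n_trials = 10_000_000  # assumed number of independent trials

# Probability of at least one failure across all trials,
# treating trials as independent.
p_at_least_one = 1 - (1 - p_failure) ** n_trials
print(f"P(at least one failure) ~ {p_at_least_one:.5f}")  # ~ 1 - e^(-10) ~ 0.99995
```

So under these toy numbers, the two claims really do come apart: the reliability claim can hold while the "we can survive the first failure" claim does all the work.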