Would it be correct to say you mean "should" in the wishful thinking sense of "we really want this outcome," rather than something normative or probabilistic?
Good question. The answer's yes, but now I'm wondering whether we really should expect alien-built AIs to be cooperators. I know Eliezer thinks we should.
This is our monthly thread for collecting these little gems and pearls of wisdom: rationality-related quotes you've seen recently, or have had stored in your quotes file for ages, which might be handy to link to in one of our discussions.