Thank you, kind fellow!
So the first one is an "AGSI", and the second is an "ANSI" (general vs narrow)?
If I understand correctly... one type of alignment (required for the "AGSI") is what I'm referring to as alignment: the system is conscious of all of our interests and tries to respect them, like a good friend. The other is that the system is narrow enough in scope that it literally just does that one thing, way better than humans could, but the scope is small enough that we can hopefully reason about it and have some idea that it's safe.
Alignment is kind of a confusing term if a...
Thanks for your response! Could you explain what you mean by "fully general"? Do you mean that alignment of narrow SI is possible? Or that partial alignment of general SI is good enough in some circumstances? If it's the latter, could you give an example?
The same game theory that has all the players racing to improve their models in spite of ethics and safety concerns will also have them getting the models to self-improve if that provides an advantage.
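To make the game-theory point concrete, here's a minimal sketch with made-up payoff numbers (my own toy illustration, not anything from the comment above): treat it as a two-player game where each lab chooses to "race" (push capabilities / self-improvement) or "hold" back. Racing is the best response no matter what the other lab does, even though mutual restraint would leave both better off.

```python
# Toy payoff matrix: (lab_A_choice, lab_B_choice) -> (payoff_A, payoff_B).
# Numbers are illustrative assumptions, chosen so racing gives a big edge
# over a lab that holds back, while mutual racing is worse than mutual restraint.
payoffs = {
    ("race", "race"): (1, 1),
    ("race", "hold"): (4, 0),
    ("hold", "race"): (0, 4),
    ("hold", "hold"): (3, 3),
}

def best_response(opponent_choice):
    """Lab A's payoff-maximizing choice, given what the other lab does."""
    return max(["race", "hold"],
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

for other in ("race", "hold"):
    print(f"If the other lab chooses {other!r}, the best response is {best_response(other)!r}")

# Racing dominates in both cases, so (race, race) is the equilibrium
# even though (hold, hold) would leave both labs better off.
```

The same dominant-strategy structure applies whether the "improvement" in question is more compute, bigger training runs, or letting the models self-improve: whatever provides an advantage gets chosen, regardless of what the other players do.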