Researchers often claim that AGI alignment is ‘solvable in principle’. Typically, I've seen this claim made confidently, in a single sentence; sometimes it's backed by a loose analogy. [1]
This claim is cruxy: if alignment is not solvable, then the alignment community is not viable. Yet little has been written that disambiguates the claim or explicitly reasons through it.
Have you claimed that ‘AGI alignment is solvable in principle’?
If so, can you elaborate on what you mean by each term? [2]
Below, I'll also try to specify each term, since I support research here by Sandberg & co.
[1]
Some analogies I've seen a few times (rough paraphrases):
- ‘humans are generally intelligent too, and humans can align with humans’
- ‘LLMs appear to do a lot of what we want them to do, so AGI could too’
- ‘other impossible-seeming engineering problems got solved too’
[2]
For example, what does ‘in principle’ mean? Does it assert that the problem described is solvable based on certain principles, or on some model of how the world works?
The claim "alignment is solvable in principle" means "there are possible worlds where alignment is solved."
Consequently, the claim "alignment is unsolvable in principle" means "there are no possible worlds where alignment is solved."
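Spelled out a bit more, as a rough sketch in standard modal notation (assuming the usual possible-worlds reading, with $\mathrm{Solved}(w)$ standing for ‘alignment is solved in world $w$’):

$$\text{solvable in principle} \;\equiv\; \Diamond\,\mathrm{Solved} \;\equiv\; \exists w \in W:\ \mathrm{Solved}(w)$$

$$\text{unsolvable in principle} \;\equiv\; \neg\Diamond\,\mathrm{Solved} \;\equiv\; \Box\,\neg\mathrm{Solved} \;\equiv\; \forall w \in W:\ \neg\mathrm{Solved}(w)$$

The second line just uses the duality $\neg\Diamond p \equiv \Box\neg p$: ‘no possible world where alignment is solved’ is the same as ‘in every possible world, alignment remains unsolved’.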
Yup, that's roughly what I meant. However, one caveat is that I would change "physically possible" to "metaphysically/logically possible", because I don't know whether worlds with different physics could exist, whereas I am pr…