I've done some similar analysis on this question myself in the past, and I am running a long-term N=1 experiment by opting not to take the attitude of belief toward anything at all. Substituting words like prefer, anticipate, and suspect has worked just fine for me, and it removes the commitment and brittleness of thought associated with holding beliefs.
Also, in looking into these questions, I learned that other languages do not pack into a single word the same set of disparate meanings (polysemy) that our word "belief" carries. In particular, the way we use it in American English to hedge (i.e., to mean "I think, but I am not sure") is not a typical usage, and my recollection (possibly flawed) is that it isn't typical in British English either.
> I’ve been trying to understand and express why I find natural language alignment ... so much more promising than any other alignment techniques I’ve found.
Could it be that we humans have millennia of experience aligning our new humans (children) using this method, whereas every other method is entirely new to us and has never been applied to a general intelligence, even if it has been tested on other AI systems? Predictions of outcomes for those methods are therefore speculative.
But it still seems like there is something missing from specifying goals directly via expression thr...
This is probably the wrong place to respond to the notion of incommensurable ontologies. Oh well, sorry.
While I agree that if an agent has a thoroughly incommensurable ontology, alignment is impossible (or perhaps even meaningless or incoherent), that also means the agent has no access whatsoever to human science. If it can't understand what we want, it also can't understand what we've accomplished. To be more concrete, it will not understand electrons from any of our books, because it won't understand our books. It won't understand our equations, ...