All of David Jilk's Comments + Replies

David Jilk

This is probably the wrong place to respond to the notion of incommensurable ontologies. Oh well, sorry.

While I agree that if an agent has a thoroughly incommensurable ontology, alignment is impossible (or perhaps even meaningless or incoherent), such an ontology also means that the agent has no access whatsoever to human science. If it can't understand what we want, it also can't understand what we've accomplished. To be more concrete, it will not understand electrons from any of our books, because it won't understand our books. It won't understand our equations, ...

I've done some similar analysis on this question myself in the past, and I am running a long-term N=1 experiment by opting not to take the attitude of belief toward anything at all. Substituting words like prefer, anticipate, and suspect has worked just fine for me and removes the commitment and brittleness of thought associated with holding beliefs.

Also, in looking into these questions, I learned that other languages do not pack into one word the same set of disparate meanings (polysemy) as our word belief. In particular, the way we use it in American English to "hedge" (i.e., meaning "I think but I am not sure") is not a typical usage, and my recollection (possibly flawed) is that it isn't typical in British English either.

> I’ve been trying to understand and express why I find natural language alignment ... so much more promising than any other alignment techniques I’ve found.

Could it be that we humans have millennia of experience aligning our new humans (children) using this method, whereas every other method is entirely new to us and has never been applied to a GI even if it has been tested on other AI systems? Thus, predictions of outcomes are speculative.

But it still seems like there is something missing from specifying goals directly via expression thr...

Seth Herd
I wouldn't say this is the method we use to align children, for the reason you point out: we can't set the motivational valence of the goals we suggest. So I'd call that "goal suggestion."

The difference in this method is that we are setting the goal value of that representation directly, editing the AGI's weights in a way we can't with children. It would be as if, when I say "it's bad to hit people," I could also set the weights into and through the amygdala so that the concept he represents, hitting people, is tied to a very negative reward prediction. That steers his actions away from hitting people.

By selecting a representation, then editing how it connects to a steering subsystem (like the human dopamine system), we are selecting it as a goal directly, not just suggesting it and allowing the system to set its own valence (goal/avoidance marker) for that representation, as we do with human children.
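Purely as an illustration of the distinction being drawn here, not anything from the actual proposal: below is a minimal toy sketch in which a chosen concept's valence is written directly into a linear "steering" weight vector (goal selection), rather than learned from experience (goal suggestion). All names, dimensions, and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concept representations (stand-ins for directions in an agent's latent space).
concepts = {
    "hitting_people": rng.normal(size=8),
    "sharing_food":   rng.normal(size=8),
}

# A minimal linear "steering subsystem": valence(concept) = w . concept_vector.
w = rng.normal(scale=0.1, size=8)  # initially near-neutral valences

def valence(name):
    """Value the toy steering subsystem predicts for acting on a concept."""
    return float(w @ concepts[name])

# "Goal suggestion" (children): we can only supply experience and reward signals and
# hope the learned weights end up assigning the valence we intended.

# "Goal selection" (the method described above): directly edit the weights so the
# chosen representation is tied to a strongly negative value prediction.
target_valence = -10.0
v = concepts["hitting_people"]
w += (target_valence - valence("hitting_people")) * v / (v @ v)  # rank-1 weight edit

print(valence("hitting_people"))  # ~ -10.0: steers the agent away from this concept
print(valence("sharing_food"))    # mostly unchanged, up to overlap between the two concept vectors
```

The only point of the contrast is that the valence is written into the steering weights directly, rather than left for the system's own learning rule to set from experience, which is all we can do with a child.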