Cross-posted from New Savanna.

Two things have just occurred to me about my recent post, Word meaning and entanglement in LLMs:

1.) That the issue is one of intuition as well, and 2.) that we’re dealing with System 1 thinking, in the System 1/System 2 dichotomy popularized by Daniel Kahneman.
I’ve not yet read Kahneman’s book – Thinking, Fast and Slow, though it’s on my “to be read someday” shelf – but I gather that System 1 is fast, intuitive, and largely tacit (to use a word from Michael Polanyi) while System 2 is slow, deliberate, and logical.
My argument in that earlier post is that, in effect, our default notion of word meaning is that it is atomic and discrete. When words are linked in phrases, sentences, and paragraphs, it is like linking beads on a thread, or freight cars in a train. The linkage is external and contingent. Without reflection, that’s just how we think about words (and meaning). That’s fine in informal discussions, but not so good in at least some technical contexts, such as large language models (LLMs).
Now, let’s take the idea that LLMs are “trained” by being asked to predict the next word. That is at least consistent with, if not actually reinforcing, this default conceptualization of atoms-of-meaning. One can easily make predictions about the behavior of atoms: one simply observes them and notes down what they do from one moment to the next. There is no sense of “interiority.”
The idea that words are entangled with one another through their meanings, by contrast, is all about “interiority.” The embedding vectors are “interior” to the token; they relate one token to another and, more generally, tokens among themselves. The idea of entanglement leads naturally to the idea of weaving, weaving a fabric of meaning. The so-called prediction procedure, then, is one of placing a word, with its roughly 12K-element vector, into the unfolding fabric of meaning. Backpropagation, in this view, is the act of fine-tuning the placement. Prediction is merely a means to an end, a device, not the point of the procedure.
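To make that picture a bit more concrete, here is a minimal toy sketch in plain numpy – not any real LLM architecture; the sizes, the mean-pooled “context,” and the tied input/output vectors are simplifying stand-ins of mine for attention and the rest. What it illustrates is the point above: the prediction is just the training signal, and what the gradient actually changes is the geometry of the token vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 50, 16                      # toy sizes, nothing like GPT-scale
E = rng.normal(0.0, 0.1, (vocab_size, dim))   # one vector per token: its "interior"

def train_step(E, context_ids, target_id, lr=0.1):
    """One gradient step; the 'prediction' is only the device that moves the vectors."""
    ctx = E[context_ids].mean(axis=0)          # crude stand-in for attention/mixing
    logits = E @ ctx                           # score every token against the context
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax: the "prediction"
    loss = -np.log(probs[target_id])           # cross-entropy on the actual next token

    # Backpropagation: nudge the vectors so the target sits better in this context.
    grad_logits = probs.copy()
    grad_logits[target_id] -= 1.0              # d(loss)/d(logits)
    grad_ctx = E.T @ grad_logits               # gradient flowing back into the context
    E -= lr * np.outer(grad_logits, ctx)       # adjust every token's output vector
    E[context_ids] -= lr * grad_ctx / len(context_ids)  # and the context tokens' vectors
    return loss

# Example: after enough steps, token 7 comes to "fit" after tokens 2 and 3 – and the
# fit is recorded not as a stored prediction but in the geometry of E itself.
for _ in range(200):
    loss = train_step(E, [2, 3], 7)
```

Run the toy loop and the loss falls; inspect E afterward and what has changed is which vectors lie near which – which is just to say, how the tokens have become entangled.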
To think in terms of atomic meaning is simply to gloss over all this. All of that may be implicit in the mathematics, but the atomic view of meaning stands in the way of allowing one’s thought to be perspicuously guided by the mathematics. The mathematics becomes (and functions as) a secondary construction.
I note finally that traditional training in propositional and symbolic logic reinforces this atomic view of word meaning. Word meaning is reduced to variable names, Ps and Qs, having no intrinsic content whatsoever. That’s fine for System 2 deliberative thinking, which is what logic was invented for. But it gets in the way of understanding how meaning works in collections of entangled, entangled what? What do we call them?
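For contrast, consider how little the logical letters carry. This is just a toy illustration, not a claim about any particular logic system: the “meaning” of P and Q is exhausted by a truth-value assignment, and the tautology holds no matter what sentences the letters are taken to name.

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# "If P and Q, then P" holds under every assignment of truth values –
# the letters are pure placeholders, with no interior at all.
print(all(implies(p and q, p) for p, q in product([True, False], repeat=2)))  # True
```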
This leads to a final irony: The world of standard computer programming is close kin to that of symbolic and propositional logic, with their variables, bindings, and values. Thus the very mode of thought necessary for programming the computational engines that create LLMs stands in the way of understanding how LLMs work. The AI/ML experts who create the models are thus crippled in understanding how those models work. The intuitions that guide them in writing code render the operations of LLMs opaque and invisible.
This opacity thus has two aspects:
1.) the sheer complexity of the models, and 2.) conceptual intractability.
I am suggesting, then, that thinking of meaning as entailing entanglement is a way to deal with the second issue (it may lead to a holographic account as well, but that is a secondary matter). The first issue, complexity, is there regardless of your conceptual instruments. Thinking in terms of entanglement will NOT eliminate the complexity, but it may well make it tractable.
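Here, as a sketch of what “tractable” might look like, is about the simplest probe consistent with that view – assuming only that a model exposes its token embedding matrix. The vectors below are random stand-ins, so the numbers mean nothing; with learned embeddings they become raw material for the kind of intuition I have in mind: relatedness as a matter of degree between vectors, not a yes-or-no link between atoms.

```python
import numpy as np

rng = np.random.default_rng(1)
tokens = ["river", "bank", "money", "loan"]     # illustrative tokens
E = rng.normal(size=(len(tokens), 8))           # stand-in for a learned embedding matrix

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Every token is related to every other token, by degree – graded overlap
# between vectors rather than external links between atoms.
for i, a in enumerate(tokens):
    for j, b in enumerate(tokens):
        if i < j:
            print(f"{a:>6} ~ {b:<6} {cosine(E[i], E[j]):+.2f}")
```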
If your goal is mechanistic interpretability, then you need conceptual tools appropriate to the mechanisms you are trying to understand, no? You need to discard, or at least bracket, intuitions based on the idea of atomic, self-contained word meaning and develop intuitions that are consistent with the mathematics underlying LLMs.