I read Quine's Word and Object on vacation last week. Overall it was fine, but there were two things that might be worth quick mentions.
Quine, Supervised Learning Supremacist
One important facet of the book is Quine's picture of how humans learn language. It's not quite that Quine is a behaviorist, though the influence is overt. Fortunately I live in 2023, when we have the abstractions of supervised and unsupervised learning; supervised learning is a much better-fitting category to stick Quine's picture in than "behaviorism."
Quine's picture of how humans learn things is all about supervised learning. The baby learns to say "mama" because the parents reward speaking behavior similar to it. Sure, there's some unsupervised learning that has to happen somewhere, but Quine basically sweeps it under the rug to get back to talking about supervised learning.
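To make the distinction concrete (this is my gloss, not anything from the book), here's a minimal sketch in Python. The supervised learner gets every example with a verdict attached, standing in for the parents' reward; the unsupervised learner gets the same kind of inputs with no verdicts and has to find structure on its own. All the data and function names here are made up for illustration.

```python
import random

# --- Supervised: learn a cutoff from (input, label) pairs -------------------
# Hypothetical data: utterances scored by how "mama"-like they sound (0..1),
# labeled 1 if the parent rewarded them, 0 otherwise.
labeled = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

def fit_threshold(pairs):
    """Pick the cutoff that best separates rewarded from unrewarded examples."""
    def errors(t):
        return sum((x >= t) != bool(y) for x, y in pairs)
    return min((x for x, _ in pairs), key=errors)

print("supervised threshold:", fit_threshold(labeled))

# --- Unsupervised: find clusters in unlabeled data --------------------------
# Same kind of inputs, but no reward signal at all; 1-D k-means just looks
# for groups of examples that hang together.
unlabeled = [0.1, 0.15, 0.2, 0.6, 0.75, 0.8, 0.85]

def kmeans_1d(xs, k=2, iters=20):
    centers = random.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

print("unsupervised cluster centers:", kmeans_1d(unlabeled))
```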
This picture is foundational to the book. Later arguments that trade in notions like similarity of concepts or simplicity of concepts all rely on how concepts would be learned if we just used supervised learning. Which is a problem because (of course) humans don't actually work that way. A more accurate picture of human learning would have big impacts on the conclusions of the book, ranging from adding more subtlety to the whole radical translation question to exposing a bunch of claims about ontology as being based on preconceptions.
For the time, Quine was taking advantage of advances in theoretical sciences to say new and interesting things in philosophy. It's not that he was being dumb. This tale is really about how, in the intervening 60 years, progress in neuroscience, psychology, mathematics, and computer science has sneakily also been progress in philosophy.
Literal language of thought
Quine uses a model of cognition based on language. GOFAI before AI. Insofar as reasoning is logical, Quine treats it as consisting of logical rules (not necessarily deductive) acting on a cognitive state that corresponds to a sentence.
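To make that picture concrete, here's a toy sketch (mine, not anything Quine writes) of reasoning as rules operating on a sentence-shaped state: the cognitive state is literally a set of sentences, and "thinking" is firing rules that look at sentences and add new ones.

```python
# Toy "language of thought" reasoner: the state is a set of sentence strings,
# and rules map whole sentences to new sentences. The rules need not be
# deductive; this example just happens to be.
state = {"gavagai is a rabbit", "rabbits are animals"}

rules = [
    (("gavagai is a rabbit", "rabbits are animals"), "gavagai is an animal"),
]

def think(state, rules):
    """Forward-chain: keep firing rules until no new sentences appear."""
    state = set(state)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in state for p in premises) and conclusion not in state:
                state.add(conclusion)
                changed = True
    return state

print(think(state, rules))
```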
Sometimes caveats are raised, but then later forgotten/elided. Quine seems to say that maybe the language of thought model is only approximately correct, but it's close enough to be getting on with. Of course, that isn't true, and we should ditch the language of thought.
This is another one of those things that's both informative about the book's historical context, and would make more sense if I knew more of the history of the ideas here. Though I'm not motivated enough to get more of the historical picture by reading Carnap et al.
My headcanon is that the language of thought picture motivates the middle of the book, where Quine lays out a systematization of the rules of language and then explores some ways one could make them even simpler and more regular. He associates simple, regular rules for language with a simple, regular core of cognition. If we instead model thought as having intermediate states that don't map neatly onto language, linguistics becomes more of a descriptive endeavor and less of a royal road to the mind.