Interesting post, on which I would like to make some comments.
First of all, I don't think it's necessarily a bad thing to be associated with the logical positivists. In their day, they were, in my view, among the most interesting proponents of the scientific world-view. The fact that their program (which was unusually well specified for a philosophical program - something that contributed to its demise, since it made it easier to falsify) ultimately was shown to be unviable does not show that their general outlook was mistaken. Verificationism and the notion that philosophy should construct a scientific world-view can still be good ideas (in fact are good ideas, in my view) even though the logical positivists' more specific ideas were misguided.
Secondly, Yudkowsky is right that unverifiable statements are not meaningless in the same sense that true nonsense is meaningless. In his essay "Positivism Against Hegelianism", Ernest Gellner makes the same point:
"The logical positivist definition of meaning was inevitably somewhat confused. Clearly, though, it could not define 'meaning' in the sense used by working linguists as the classes of sound patterns which are emitted, recognised and socially accepted in a given speech community. By such a criterion, 'metaphysical' [i.e. unverifiable - my note] statements patently would be meaningful. The anti-Platonism of paradigmatic logical positivism equally prevents us from interpreting the delimitation of meaning as the characterisation of a given essence of 'meaning', as for them there are no such essences (though in some semi-conscious manner, and in disharmony with their nominal anti-Platonism, I strongly suspect that this was precisely what many of them did mean).
The only thing which in effect they could mean, plausibly and in harmony with their other principles, was this: the definition circumscribed, not the de facto custom of any one or every linguistic community, but the limits of the kind of use of speech which deserves respect and commendation. It was a definition not of meaningful speech, but of commendable, good speech. Their verificationism was a covert piece of ethics. Meaningless was a condemnation, and meaning a commendation." (Gellner, Relativism and the Social Sciences, pp. 30-31)
As I argue in my article "Ernest Gellner's Use of the Social Sciences in Philosophy" (Phil of Soc Sci, 2014:1), this interpretation is not quite right, though. Gellner's interpretation is over-charitable - the logical positivists really did see their assertion that unverifiable statements are meaningless as descriptive (even though Gellner is right that they at some level intended it to be normative). More importantly, so did their chief critic Quine, who dealt an important blow to logical positivism by showing (in "Two Dogmas of Empiricism") that the logical positivists' conception of meaning failed to illuminate how language actually works. He subsequently argued that it should be replaced by his own notion of stimulus meaning - a behaviouristic notion which he held to be empirically acceptable, unlike the logical positivists' notion of meaning (Word and Object).
The logical positivist notion of meaning was, in short, not empirically grounded in any way. They just asserted that some statements are meaningless and some are not, while producing little argument for it - and certainly no empirical evidence. My guess is that there is an important lesson to be learnt here. For all their talk of a scientific world-view, the logical positivists were rather influenced by the German tradition of a priori philosophy (e.g. Carnap was influenced by neo-Kantianism). Also, there was a strong anti-psychologistic trend in early 20th century philosophy, inherited from the 19th century (especially from Frege). For an excellent overview of why psychology was severed from philosophy in Germany, read Martin Kusch's Psychologism: A Case-Study in the Sociology of Philosophical Knowledge, where it is convincingly argued that this happened for social, non-rational reasons.
Naturalistic philosophers have for centuries tried to make philosophy more empirical and more based on the sciences, but although some progress has been made, it seldom seems to go far enough. E.g. the later Wittgenstein - in many ways a naturalistic philosopher - argued that philosophers should "not think, but look", and that we should look upon language in an "anthropological way", seeing how it really works (rather than constructing a priori models, as philosophers often had done). Still, he did no empirical investigations himself. Likewise, the logical positivists venerated science but used a non-empirical notion of meaning.
There might be several reasons for this, but the most important one seems to me to be that philosophers are more or less exclusively trained in a priori reasoning and don't really have a lot of other useful knowledge - certainly not cutting-edge knowledge. In order to make philosophy thoroughly naturalistic, philosophers must - as has been argued on this site - be extensively trained especially in cognitive psychology (which I hold to be the empirical discipline most useful to philosophers), but also, as far as possible (and depending on specialization), in other disciplines.
Lastly, I would like to add that Karl Popper's famous falsificationism was probably closer to Yudkowsky's thinking, since Popper did not see falsifiability as a criterion of meaning, but rather as a criterion of whether a theory should be seen as scientific. Popper was, though, much more positively disposed towards metaphysics (e.g. he was a realist) than the logical positivists, and I'm not sure whether Yudkowsky would want to follow him on that point.
Followup to: Making Beliefs Pay Rent, Belief in the Implied Invisible
Degrees of Freedom accuses me of reinventing logical positivism, badly:
Logical positivists were best known for their verificationism: the idea that a belief is defined in terms of the experimental predictions that it makes. Not just tested, not just confirmed, not just justified by experiment, but actually defined as a set of allowable experimental results. An idea unconfirmable by experiment is not just probably wrong, but necessarily meaningless.
I would disagree, and exhibit logical positivism as another case in point of "mistaking the surface of rationality for its substance".
Consider the hypothesis:

"A chocolate cake in the center of the Sun at 12am 8/8/1."
I would say that this hypothesis is meaningful and almost certainly false. Not that it is "meaningless". Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false.
On the other hand, if some postmodernist literature professor tells me that Shakespeare shows signs of "post-colonial alienation", the burden of proof is on him to show that this statement means anything, before we can talk about its being true or false.
I think the two main probability-theoretic concepts here are Minimum Message Length and directed causal graphs - both of which came along well after logical positivism.
By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events. By talking about atoms, I can compress the description of the chemical reactions I've observed.
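The compression point can be made concrete with a toy two-part (MDL-style) code. The models, the parameter cost, and the numbers below are illustrative assumptions, not from the post: positing a hidden bias (an unseen cause) shortens the total description of a skewed sequence of observations.

```python
import math

def bits_fair(seq):
    # Null model: no posited structure; each 0/1 observation costs 1 bit.
    return float(len(seq))

def bits_biased(seq):
    # Two-part code: roughly 0.5*log2(n) bits to state the parameter p
    # (a common, illustrative precision choice), plus -log2 likelihood
    # of the data under p.
    n = len(seq)
    k = sum(seq)
    p = k / n
    if p in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -(k * math.log2(p) + (n - k) * math.log2(1 - p))
    return 0.5 * math.log2(n) + data_bits

seq = [1] * 90 + [0] * 10            # a strongly skewed observation sequence
print(bits_fair(seq))                # 100.0 bits with no model
print(bits_biased(seq))              # fewer bits: the hidden cause pays rent
```

The posited bias costs a few bits to describe, but saves far more on the data, which is the sense in which talking about unseen causes compresses the description of visible events.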
We build up a vast network of unseen causes, standing behind the surface of our final sensory experiences. Even when you can measure something "directly" using a scientific instrument, like a voltmeter, there is still a step of this sort in inferring the presence of this "voltage" stuff from the visible twitching of a dial. (For that matter, there's a step in inferring the existence of the dial from your visual experience of the dial; the dial is the cause of your visual experience.)
I know what the Sun is; it is the cause of my experience of the Sun. I can fairly readily tell, by looking at any individual object, whether it is the Sun or not. I am told that the Sun is of considerable spatial extent, and far away from Earth; I have not verified this myself, but I have some idea of how I would go about doing so, given precise telescopes located a distance apart from each other. I know what "chocolate cake" is; it is the stable category containing the many individual transient entities that have been the causes of my experience of chocolate cake. It is not generally a problem for me to determine what is a chocolate cake, and what is not. Time I define in terms of clocks.
Bringing together the meaningful general concepts of Sun, space, time, and chocolate cake - all of which I can individually relate to various specific experiences - I arrive at the meaningful specific assertion, "A chocolate cake in the center of the Sun at 12am 8/8/1". I cannot relate this assertion to any specific experience. But from general beliefs about the probability of such entities, backed up by other specific experiences, I assign a high probability that this assertion is false.
See also, "Belief in the Implied Invisible". Not every untestable assertion is false; a deductive consequence of general statements of high probability must itself have probability at least as high. So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.
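The "at least as high" claim follows directly from the sum rule: if A deductively entails B, then P(B|A) = 1, so P(B) = P(A) + P(B|not A)P(not A) >= P(A). A minimal numeric sketch, with made-up probabilities:

```python
# If hypothesis A ("physics continues unchanged past the horizon") entails
# B ("the spaceship still exists"), then P(B|A) = 1 and
# P(B) = P(B|A)P(A) + P(B|not A)P(not A) >= P(A).
p_a = 0.999               # illustrative prior for the general statement
p_b_given_not_a = 0.0     # worst case: B never holds without A
p_b = 1.0 * p_a + p_b_given_not_a * (1 - p_a)
print(p_b >= p_a)         # True: the untestable consequence stays probable
```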
If logical positivism / verificationism were true, then the assertion of the spaceship's continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence. I don't see how this is compatible with a correspondence theory of truth.
On the other hand, if you have a whole general concept like "post-colonial alienation", which does not have specifications bound to any specific experience, you may just have a little bunch of arrows off on the side of your causal graph, not bound to anything at all; and these may well be meaningless.
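One way to picture "arrows off on the side" is as graph reachability: a hypothesis node is bound to experience only if some directed path leads from it to an observation node. A minimal sketch, with node names chosen for illustration:

```python
# Causal graph as node -> children. Node names are illustrative.
graph = {
    "voltage": ["dial_position"],
    "dial_position": ["visual_experience_of_dial"],
    "post_colonial_alienation": [],   # arrows bound to nothing observable
}
observations = {"visual_experience_of_dial"}

def reaches_observation(node):
    """True if node has a directed path to some observation node."""
    stack, seen = [node], set()
    while stack:
        n = stack.pop()
        if n in observations:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(graph.get(n, []))
    return False

print(reaches_observation("voltage"))                   # True
print(reaches_observation("post_colonial_alienation"))  # False
```

A node that reaches no observation, even indirectly, is the graph-theoretic picture of a concept with no specifications bound to any experience.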
Sometimes, when you can't find any experimental way to test a belief, it is meaningless; and the rationalist must say "It is meaningless." Sometimes this happens; often, indeed. But to go from here to "The meaning of any specific assertion is entirely defined in terms of its experimental distinctions" is to mistake a surface happening for a universal rule. The modern formulation of probability theory talks a great deal about the unseen causes of the data, factoring out these causes as separate entities and making statements specifically about them.
To be unable to produce an experiential distinction from a belief is usually a bad sign - but it does not always prove that the belief is meaningless. A great many untestable beliefs are not meaningless; they are meaningful, just almost certainly false: They talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time. New instances of the concepts are asserted to be arranged in such a way as to produce no new experiences (chocolate cake suddenly forms in the center of the Sun, then dissolves). But without that specific supporting evidence, the prior probability is likely to come out pretty damn small - at least if the untestable statement is at all exceptional.
If "chocolate cake in the center of the Sun" is untestable, then its alternative, "hydrogen, helium, and some other stuff, in the center of the Sun at 12am on 8/8/1", would also seem to be "untestable": hydrogen-helium on 8/8/1 cannot be experientially discriminated against the alternative hypothesis of chocolate cake. But the hydrogen-helium assertion is a deductive consequence of general beliefs themselves well-supported by experience. It is meaningful, untestable (against certain particular alternatives), and probably true.
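In Bayesian terms, "untestable against each other" means the two hypotheses assign the same likelihood to every possible experience, so no evidence can move their relative credence; everything rides on the priors. A sketch with made-up numbers:

```python
# Two hypotheses predicting identical experiences: the likelihood terms
# cancel, so the posterior ratio equals the prior ratio. Numbers are
# illustrative, not from the post.
prior_hydrogen = 1.0 - 1e-12   # backed by well-supported general physics
prior_cake = 1e-12             # an exceptional, unsupported assertion
likelihood = 0.5               # both assign the same probability to the data

posterior_ratio = (prior_hydrogen * likelihood) / (prior_cake * likelihood)
prior_ratio = prior_hydrogen / prior_cake
print(posterior_ratio == prior_ratio)  # True: evidence leaves the ratio fixed
```

This is why the hydrogen-helium assertion can be untestable against the cake hypothesis and still be probably true: its prior is propped up by general beliefs that are themselves well-supported.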
I don't think our discourse about the causes of experience has to treat them strictly in terms of experience. That would make discussion of an electron a very tedious affair. The whole point of talking about causes is that they can be simpler than direct descriptions of experience.
Having specific beliefs you can't verify is a bad sign, but, just because it is a bad sign, does not mean that we have to reformulate our whole epistemology to make it impossible. To paraphrase Flon's Axiom, "There does not now, nor will there ever, exist an epistemology in which it is the least bit difficult to formulate stupid beliefs."