How do you deal with Gödel's finding that every sufficiently powerful formal system contains questions that can't be resolved within it, and whose answers can't be verified?
Okay, circling around on this to maybe say something more constructive now that I've thought about it a bit.
Part of the problem is that your central thesis is not very clear on first read, so I had to think about it for a while to work out what "big idea" or ideas motivate this post. I realize you say right up top that you believe a strong version of verificationism is correct, but to me that's not really getting at the core of what you're thinking; that's just something worked out by other people that you can point at and say "something in that direction seems right".
(FWIW, even some of the people who came up with logical positivism and related ideas like verificationism eventually worked themselves into a corner, realized there was no way out, and watched the whole project fall apart. The arguments for why it doesn't work get pretty subtle if you really press the issue, and I doubt I could do them justice, so I'll stay at a higher level and may not have the time and energy to address every objection you could raise. But basically there's 50+ years of literature trying to make ideas like this work, only to find there were inevitably problems.)
So, as best I can tell, ...
On a deductive level, verificationism is self-defeating; if it's true then it's meaningless. On an inductive level, I've found it to be a good rule of thumb for determining which controversies are likely to be resolvable and which are likely to go nowhere.
General assessment: valid critiques, but then you go on to make your own metaphysical claims in exactly the opposite direction, missing the point of your own analysis.
Huh, upon reflection I can't figure out a good way to define reality without referring to subjective experience. I might not go so far as to say it's not a coherent concept, but you raise some interesting points.
Thanks for writing up an excellent Reductio ad Absurdum of verificationism. As they say, "One man's modus ponens is another man's modus tollens".
I strongly agree with the claim that it is self-defeating. Here's another odd consequence: suppose I roll a die and see that it landed on 6. I then erase that information from my brain, which leaves us in a position where the statement "the die landed on 6" is impossible to verify. Does the statement then become meaningless?
Beyond this, I would say that "exists" is a primitive. If it makes sense to take anything as a primitive, the...
My concept of a meaningless claim is a claim that can be substituted for any alternative without any change to anticipated experience. For example, the claim 'Photon does not exist after reaching the Event Horizon' can be substituted for the claim 'Photon exists after crossing the Event Horizon' without bringing any change to anticipated experience. Thus, it is not rational to believe in any of the alternatives. What is your practical definition of 'meaningless'?
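This substitution criterion can be put in operational terms: if a claim's truth value never enters any prediction, swapping it for its negation changes nothing observable. A minimal sketch, with the function name and observation strings purely illustrative (this is not physics, just the logical structure of the criterion):

```python
# Two "maps" that differ only in an unverifiable ontological flag: whether the
# photon still "exists" after crossing the event horizon. The observable is
# what a detector outside the horizon records.

def predicted_observations(photon_exists_past_horizon: bool) -> list[str]:
    """Predictions an outside observer can actually check. The flag never
    enters any branch, so both rival 'claims' yield the same anticipated
    experience."""
    observations = [
        "photon emitted at t=0",
        "redshifted signal fades as photon nears the horizon",
        "no signal received after the photon crosses the horizon",
    ]
    # `photon_exists_past_horizon` is deliberately unused: no possible
    # observation from outside the horizon depends on it.
    return observations

# The two rival metaphysical claims are observationally identical:
assert predicted_observations(True) == predicted_observations(False)
```

On this criterion, any claim whose flag is "deliberately unused" in every prediction is exactly the kind the commenter calls meaningless.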
It seems like you are arguing both that "reality exists" is false and that it's meaningless. It's worth keeping those apart.
When it comes to the meaningfulness of terms, EY wrote more about truth than about reality. He defends the usefulness of the term "truth" by asking:
If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?
He finds that he has good reasons to answer yes. Similarly, it might be useful to tell an AI that has a notion that there's a reality outside of ...
A couple thoughts:
I think of explanations as being prior to predictions. The goal of (epistemic) rationality, for me, is not to accurately predict what future experiences I will have. It’s to come up with the best model of reality that includes the experiences I’m having right now.
I’ve even lately come to be skeptical of the notion of anticipated experience. In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things. There are substitute notions that play the role ...
side note (not addressing the main point)
it's not like there's a great altruism argument if only we conceded that verificationism is wrong
altruism is a value, not a belief. do you think there's no great argument for why it's possible in principle to configure a mind to be altruistic?
Yeah. This post could also serve, more or less verbatim, as a write-up of my own current thoughts on the matter. In particular, this section really nails it:
...As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.
[...]
I don't suppose that. I
So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.”
Here, reality is merely a convenient term to use, which helps conceptualize errors in the map.
No, the point of the argument for realism, in general, is that it explains how prediction, in general, is possible.
That's different from saying that the predictive ability of a specific theory is good evidence for the ontological accuracy of a specific theory.
Is the thing you are defending verificationism, or anti-realism?
Here you say you argue *for* verificationism:
This post consists of two parts—first, the positive case for verificationism,
...and here you argue *from* verificationism:
One consequence of verificationism
If you are arguing for verificationism, you need to argue against the main alternative theory: that meaning is an essentially linguistic issue.
1. It’s impossible to reach that world from ours, it’s entirely causally disconnected, and
2. That world “really exists”
That’s exactly what decoheren...
"Electrons exist" means I anticipate other people acting in a way that matches how they would act if their observations matched my map of how electrons function. Verbal shorthands are useful things.
Response to: Making Beliefs Pay Rent (in Anticipated Experiences), Belief in the Implied Invisible, and No Logical Positivist I
I recently decided that some form of strong verificationism is correct - that beliefs that don't constrain expectation are meaningless (with some caveats). After reaching this conclusion, I went back and read EY's posts on the topic, and found that they didn't really address the strong version of the argument. This post consists of two parts - first, the positive case for verificationism, and second, responding to EY's argument against it.
The case for Strong Verificationism
Suppose I describe a world to you. I explain how the physics works, I tell you some stories about what happens in that world. I then make the dual assertions that:
1. It's impossible to reach that world from ours, it's entirely causally disconnected, and
2. That world "really exists"
One consequence of verificationism is that, if 1 is correct, then 2 is meaningless. Why is it meaningless? For one, it's not clear what it means, and every alternative description will suffer from similarly vague terminology. I've tried, and asked several others to try, and nobody has been able to give a definition of what it means for something to "really exist" apart from expectations that actually clarifies the question.
Another way to look at this is through the map-territory distinction. "X really exists" is a claim made within the map, yet it purports to be directly about the territory. That's a category error, and hence meaningless.
Now, consider our world. Again, I describe its physics to you, and then assert "This really exists." If you found the above counterintuitive, this will be even worse - but I assert this latter claim is also meaningless. The belief that this world exists does not constrain expectations, above and beyond the map that doesn't contain such a belief. In other words, we can have beliefs about physics that don't entail a belief in "actual existence" - such a claim is not required for any predictions and is extraneous and meaningless.
As far as I can tell, we can do science just as well without assuming that there's a real territory out there somewhere.
Some caveats: I recognize that some critiques of verificationism relate to mathematical or logical beliefs. I'm willing to restrict the set of statements I consider incoherent to ones that make claims about what "actually exists", which avoids this problem. Also, following this paradigm, one will end up with many statements of the form "I expect to experience events based on a model containing X", and I'm ok with a colloquial usage of exist to shorten that to "X exists". But when you get into specific claims about what "really exists", I think you get into incoherency.
Response to EY sequence
In Making Beliefs Pay Rent, he asserts the opposite without argument:
He then elaborates:
I disagree with the last sentence. These beliefs are ways of saying "I expect my experiences to be consistent with my map which says g=9.8m/s^2, and also says this building is 120 meters tall". Perhaps the beliefs are a compound of the above and also "my map represents an actual world" - but as I've argued, the latter is both incoherent and not useful for predicting experiences.
In Belief in the Implied Invisible, he begins an actual argument for this position, which is continued in No Logical Positivist I. He mostly argues that such things actually exist. Note that I'm not arguing that they don't exist, but that the question of whether they exist is meaningless - so his arguments don't directly apply, but I will address them.
As above, my claim is not that the photon disappears. That would indeed be a silly idea. My claim is that the very claim that a photon "exists" is meaningless. We have a map that makes predictions. The map contains a photon, and it contains that photon even outside any areas relevant to predictions, but why should I care? The map is for making predictions, not for ontology.
Later on, he mentions Solomonoff induction, which is somewhat ironic because that is explicitly a model for prediction. Not only that, but the predictions produced with Solomonoff induction are from an average of many different machines, "containing" many different entities. The map of Solomonoff induction, in other words, contains far more entities than anyone but Max Tegmark believes in. If we're to take that seriously, then we should just agree that everything mathematically possible exists. I have much less disagreement with that claim (despite also thinking it's incoherent) than with claims that some subset of that multiverse is "real" and the rest is "unreal".
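The "average of many different machines" point can be made concrete with a toy sketch. This is a hypothetical simplification, not real Solomonoff induction (which mixes over all programs for a universal machine and is uncomputable): a tiny hand-picked hypothesis class stands in for "all programs", each weighted by 2^-length, and the prediction is the weighted average over every hypothesis still consistent with the data.

```python
from fractions import Fraction

# A tiny hypothesis class standing in for "all programs": each hypothesis is
# (name, notional length in bits, rule mapping history -> next bit). The
# prior weight of a hypothesis is 2^-length, echoing the Solomonoff prior.
HYPOTHESES = [
    ("always-0",    2, lambda h: 0),
    ("always-1",    2, lambda h: 1),
    ("alternate",   3, lambda h: (1 - h[-1]) if h else 0),
    ("repeat-last", 3, lambda h: h[-1] if h else 0),
]

def consistent(rule, history):
    """A deterministic hypothesis has likelihood 1 iff it reproduces every
    observed bit from the preceding prefix; otherwise likelihood 0."""
    return all(rule(history[:i]) == history[i] for i in range(len(history)))

def mixture_p1(history):
    """P(next bit = 1) under the 2^-length prior, renormalized over the
    hypotheses that survive the data: the prediction is an average over
    many 'machines', each of whose maps contains different entities."""
    total = Fraction(0)
    on_one = Fraction(0)
    for _, length, rule in HYPOTHESES:
        if consistent(rule, history):
            weight = Fraction(1, 2 ** length)
            total += weight
            if rule(history) == 1:
                on_one += weight
    return on_one / total

print(mixture_p1([0]))           # several hypotheses still survive
print(mixture_p1([0, 1, 0, 1]))  # only "alternate" survives
```

Note that the mixture's prediction is never the output of one privileged machine: every surviving hypothesis contributes, which is exactly why treating the predictor's internals as an ontology commits you to far more entities than anyone singles out as "real".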
I don't suppose that. I suppose that the concept of a photon actually existing is meaningless and irrelevant to the model.
This latter belief is an "additional fact". It's more complicated than "these equations describe my expectations".
This is a tough question, if only because altruism is complicated to ground on my view - if other people's existence is meaningless, in what sense can it be good to do things that benefit other people? I suspect it all adds up to normality. Regardless, I'll note that the question applies on my view just as much to a local altruistic act, since the question of whether other people have internal experiences would be incoherent. If it adds up to normality there, which I believe it does, then it should present no problem for the spaceship question as well. I'll also note that altruism is hard to ground regardless - it's not like there's a great altruism argument if only we conceded that verificationism is wrong.
Now for No Logical Positivist I.
This is the first post that directly addresses verificationism on its own terms. He defines it in a way similar to my own view. Unfortunately, his main argument seems to be "the map is so pretty, it must reflect the territory." It's replete with map-territory confusion:
Sure, but a simpler map implies nothing about the territory.
Further on:
Sure, it's incompatible with the claim that beliefs are true if they correspond to some "actual reality" that's out there. That's not an argument for the meaningfulness of that assertion, though, because no argument is given for this correspondence theory of truth. The link is dead, but the essay is at https://yudkowsky.net/rational/the-simple-truth/ and grounds truth with a parable about sheep. We can ground truth just as well as follows: a belief is a statement with implications as to predicted experiences, and a belief is true insofar as it corresponds to experiences that end up happening. None of this requires an additional assumption that there's an "actual reality".
Interestingly, in that post he offers a quasi-definition of "reality" that's worth addressing separately.
Here, reality is merely a convenient term to use, which helps conceptualize errors in the map. This doesn't imply that reality exists, nor that reality as a concept is coherent. I have beliefs. Sometimes these beliefs are wrong, i.e. I experience things that are inconsistent with those beliefs. On my terms, if we want to use the word reality to refer to a set of beliefs that would never result in such inconsistency, that's fine, and those beliefs would never be wrong. You could say that a particular belief "reflects reality" insofar as it's part of that set of beliefs that are never wrong. But if you wanted to say "I believe that electrons really exist", that would be meaningless - it's just "I believe that this belief is never wrong", which is just equal to "I believe this".
Moving back to the Logical Positivism post:
Again, the hydrogen-helium assertion is a feature of the map, not the territory. One could just as easily have a map that doesn't make that assertion, but has all the same predictions. The question of "which map is real" is a map-territory confusion, and meaningless.
Sure, as I mentioned above, I'm perfectly fine with colloquial discussion of claims using words like exist in order to make discussion / communication easier. But that's not at all the same as admitting that the claim that electrons "exist" is coherent, rather than a convenient shorthand to avoid adding a bunch of experiential qualifiers to each statement.