Yes, the meaning of a statement depends causally on empirical facts. But this doesn't imply that the truth value of "Bachelors are unmarried" depends less than completely on its meaning.
I think we are in agreement here.
My point is that if your choice of particular axioms is entangled with reality, then you are already using a map to describe some territory. And then you can just as well describe this territory more accurately.
I think the instrumental justification (like Dutch book arguments) for laws of epistemic rationality (like logic and probability) is too weak: in situations where there is in fact no danger of being exploited by a Dutch book (because there is nobody around to run such an exploit), it is not instrumentally irrational to be epistemically irrational. But you remain epistemically irrational if you have e.g. incoherent beliefs.
Rationality is about systematic ways to arrive at correct map-territory correspondence. Even if in your particular situation no one is exploiting you, the fact that you are exploitable in principle is bad. But to know what is exploitable in principle we generalize from all the individual acts of exploitation. It all has to be grounded in reality in the end.
Epistemic rationality laws being true in virtue of their meaning alone (being analytic) therefore seems a more plausible justification for epistemic rationality.
You've said yourself that meaning is downstream of experience. So in the end you have to appeal to reality when trying to justify it.
Ok, let me see if I'm understanding this correctly: if the experiment is checking the X-th digit specifically, you know that it must be a specific digit, but you don't know which, so you can't make a coherent model. So you generalize up to checking an arbitrary digit, where you know that the results are distributed evenly among {0...9}, so you can use this as your model.
Basically yes. Strictly speaking it's not just any arbitrary digit, but any digit about whose value you know exactly as much as you know about the value of X.
For any digit you can execute this algorithm:
Check whether you know about it more (or less) than you know about X.
Yes: Go to the next digit
No: Add it to the probability experiment
As a result you get a bunch of digits about whose values you know as much as you know about X. And so you can use them to estimate your credence for X.
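A sketch of this algorithm in code (the toy range, the target position, and the `known` table are all hypothetical stand-ins for one's actual epistemic state):

```python
# Toy epistemic state: everything we know about individual digits of pi.
# Positions absent from the table are digits we know nothing about.
known = {1: 3, 2: 1, 3: 4, 4: 1, 5: 5}   # we've memorised pi = 3.1415...

X = 997                      # hypothetical target position
positions = range(1, 1001)   # a small toy universe instead of all digits

# The algorithm above: walk over the digits and keep exactly those
# about which our knowledge works the same way as our knowledge of X
# (here: we know nothing about either).
experiment = [p for p in positions if (p in known) == (X in known)]

# X itself lands in the reference class, memorised digits don't;
# credence about X is then estimated from this class.
```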
The first part about not having a coherent model sounds a lot like the frequentist idea that you can't generate a coherent probability for a coin of unknown bias - you know that it's not 1/2 but you can't decide on any specific value.
Yes. As I say in the post:
By the same logic tossing a coin is also deterministic, because if we toss the same coin exactly the same way in exactly the same conditions, the outcome is always the same. But that's not how we reason about it. Just as we've generalized the coin-tossing probability experiment from multiple individual coin tosses, we can generalize the "checking whether some previously unknown digit of pi is even or odd" probability experiment from multiple individual checks of different unknown digits of pi.
The way a lot of Bayesians mock frequentists for not being able to conceptualize the probability of a coin of unknown fairness, and then make the exact same mistake by being unable to conceptualize the probability of a specific digit of pi whose value is unknown, has always struck me as quite ironic.
This seems equivalent to my definition of "information that would change your answer if it was different", so it looks like we converged on similar ideas?
I think we did!
I'd argue that it's physical uncertainty before the coin is flipped, but logical certainty after. After the flip, the coin's state is unknown the same way the X-th digit of pi is unknown - the answer exists and all you need to do is look for it.
That's not how people usually use these terms. The uncertainty about the state of the coin after the toss is describable within the framework of possible worlds, just like uncertainty about a future coin toss, but uncertainty about a digit of pi isn't.
Moreover, isn't it the same before the flip? It's not that the coin toss is "objectively random". At the very least, the answer also exists in the future, and all you need to do is wait a bit for it to be revealed.
The core principle is the same: there is in fact some value that the Probability Experiment function takes in this iteration, but you don't know which. You can take some actions - look under the box, do some computation, just wait for a couple of seconds - to learn the answer. But you can also reason approximately from the state of your current uncertainty before these actions are taken.
Is there a formal way you'd define this? My first attempt is something like "information that, if it were different, would change my answer"
I'd say that the rule is: "To construct a probability experiment, use the minimum generalization that still allows you to model your uncertainty".
In the case of the 1,253,725,569th digit of pi, if I try to construct a probability experiment consisting only of checking this particular digit, I fail to model my uncertainty, as I don't yet know the value of this digit.
So instead I use a more general probability experiment of checking any digit of pi that I don't know. This allows me to account for my uncertainty.
Now I may worry that I overdid it and have abstracted away some relevant information, so I check:
- Does knowing that the digit in question is specifically the 1,253,725,569th affect my credence?
- Not until I receive some evidence about the state of specifically 1,253,725,569th digit of pi.
- So until then this information is not relevant.
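This generalization can even be checked numerically. A minimal sketch (Machin's formula for the digits; treating everything past the first few memorised digits as "unknown" is an arbitrary illustration): among the digits we haven't looked at, even and odd values occur in roughly equal ratio, which is exactly what the generalized probability experiment assumes.

```python
def arctan_inv(x, unity):
    # Integer-arithmetic Taylor series for arctan(1/x), scaled by `unity`.
    total = term = unity // x
    n, sign = 3, -1
    while term:
        term //= x * x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def pi_digits(n):
    # First n decimal digits of pi ("3141...") via Machin's formula,
    # with 10 guard digits absorbing truncation error.
    unity = 10 ** (n + 10)
    pi_scaled = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi_scaled)[:n]

digits = pi_digits(1000)
# Treat every digit past the ones we've memorised as "unknown":
unknown = [int(d) for d in digits[10:]]
even_ratio = sum(d % 2 == 0 for d in unknown) / len(unknown)
# even_ratio comes out close to 1/2
```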
Unrelatedly, would you agree that there's not really a meaningful difference between logical and physical uncertainties?
Yes. I'm making this point here:
We can notice that coin tossing is, in fact, similar to not knowing whether some digit of pi is even or odd. There are two outcomes with an equal ratio among the iterations of the probability experiment. I can take the model from coin tossing, apply it to the evenness of some digit of pi unknown to me, and get a correct result. So we can generalize even further and call both of them, and any other probability experiment with the same properties as:
There is no particular need to talk about logical and physical uncertainty as different things. It's just a historical artifact of the confused philosophical approach of possible worlds, and I'm presenting a better way.
logical uncertainty is where you could find the answer in principle but haven't done so; physical uncertainty is where you don't know how to find the answer.
Even this difference is not real. Consider:
A coin is tossed and put into an opaque box, without showing you the result. What is the probability that the result of this particular toss was Heads?
This is physical uncertainty. And yet I do know how to find the answer: all I need to do is remove the opaque box and look. Nevertheless, I can talk about my credence before I've looked at the coin.
The exact same situation holds for not knowing a particular digit of pi. Yes, I do know a way to find the answer: google an algorithm for calculating any digit of pi and feed it my digit as input. Nevertheless, I can still talk about my credence before I've performed all these actions.
As soon as you have your axioms you can indeed analytically derive theorems from them. However, the way you determine which axioms to pick is entangled with reality. It's an especially clear case with probability theory, where the development of the field was motivated by very practical concerns.
The reason why some axioms appear to us appropriate for a logic of beliefs and some don't is that we know what beliefs are from experience. We are trying to come up with a mathematical model approximating this element of reality - an intensional definition for an extensional referent that we already have.
Being Dutch-bookable is considered irrational because you systematically lose your bets. Likewise, continuing to believe that a particular outcome can happen in a setting where it in fact can't - and where another agent could've already figured that out under the same limitations you have - is irrational for the same reason.
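The first half of this can be made concrete with a toy Dutch book (numbers arbitrary): an agent whose credences in "rain" and "no rain" sum to more than 1, and who treats credences as fair betting prices, is guaranteed to lose either way.

```python
# Incoherent belief state: P(rain) + P(no rain) = 1.2 > 1.
p_rain, p_dry = 0.6, 0.6

# A bookie sells two $1 tickets priced at the agent's credences:
# ticket A pays $1 if it rains, ticket B pays $1 if it doesn't.
cost = p_rain + p_dry          # agent pays 1.2 up front for both

net_if_rain = 1.0 - cost       # only ticket A pays out
net_if_dry = 1.0 - cost        # only ticket B pays out
# Both outcomes lose ~0.2: a systematic loss regardless of the weather.
```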
Similar to "Bachelors are unmarried".
Indeed. There are, in fact, real-world reasons why the words "bachelor" and "unmarried" have these meanings in the English language - in both the "why these particular words for these particular meanings?" and the "why did these meanings deserve designated words at all?" senses. The etymology of the English language and the existence of the institution of marriage in the first place, both of which are results of the social dynamics of humans whose psyche has evolved in a particular way.
The truth of a statement in general is determined by two things: its meaning and what the world is like.
I hope the previous paragraph does a good enough job of showing how the meaning of a statement is, in fact, connected to the way the world is.
Truth is a map-territory correspondence. We can separately talk about its two components: validity and soundness. As long as we simply conceptualize some mathematical model, logically pinpointing it for no particular reason, then we are simply dealing with tautologies and there is only validity. Drawing maps for the sake of drawing maps, without thinking about territory. But the moment we want our model to be about something, we encounter soundness. Which requires some connection to the outside world. And then there is a natural question of having a more accurate map and how to have it.
I think the probability axioms are a sort of "logic of sets of beliefs". If the axioms are violated the belief set seems to be irrational.
Well yes, they are. But how do you know which axioms are the correct axioms for a logic of sets of beliefs? How come violation of some axioms seems irrational, while violation of other axioms does not? What do you even mean by "rational" if not "a systematic way to arrive at map-territory correspondence"?
You see, in any case you have to ground your mathematical model in reality. Natural numbers may be logically pinpointed by arithmetical axioms, but whether some action with particular objects behaves like addition of natural numbers is a matter of empiricism. The reason we came up with the notion of natural numbers in the first place is that we've encountered a lot of stuff in reality whose behavior generalizes this way. And the same thing goes for a logic of beliefs. First we encounter some territory, then we try to approximate it with a map.
What I'm trying to say is that if you are already trying to make a map that corresponds to some territory, why not make one that corresponds better? You can declare that any consistent map is "good enough" and stop your inquiry there, but surely you can do better. You can declare that any consistent map following several simple conditions is good enough - that's a step in the right direction, but still there is a lot of room for improvement. Why not figure out the most accurate map that we can come up with?
That's a more challenging thing to formalize than just logic or probability theory.
Well, yes, it's harder than the subjective probability approach you are talking about. We are trying to pinpoint a more specific target: a probabilistic model for a particular problem, instead of just some probabilistic model.
It would amount to a theory of induction. We would need to formalize and philosophically justify at least something like Ockham's razor.
No, not really. We can do a lot before we have to go down this particular rabbit hole. I hope my next post will make it clear enough.
Degrees of belief adhering to the probability calculus at any point in time rule out things like "Mary is a feminist and a bank teller" simultaneously receiving a higher degree of belief than "Mary is a bank teller". Adherence also requires, e.g., that if P(A) = 1 and P(B) = 1, then P(A and B) = 1. That's called "probabilism" or "synchronic coherence".
What is even the motivation for it? If you are not interested in your map representing a territory, why demand that your map be coherent?
And why not assume some completely different axioms? Surely there are a lot of potential ways to logically pinpoint things. Why this one in particular? Why not allow
P(Mary is a feminist and bank teller) > P(Mary is a feminist)?
Why not simply remove all the limitations from the function P?
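For concreteness, a toy sketch of what removing the limitations would allow (propositions and numbers hypothetical): nothing about a bare function P stops the conjunction-fallacy assignment; it is only the added axiom P(A & B) ≤ P(A) that flags it.

```python
# An arbitrary assignment of numbers to propositions -- an
# unconstrained "P" accepts this without complaint:
beliefs = {
    "Mary is a bank teller": 0.3,
    "Mary is a feminist": 0.8,
    "Mary is a feminist and bank teller": 0.5,   # conjunction fallacy
}

def conjunction_ok(p, a, b, a_and_b):
    # One probabilism constraint: P(A & B) <= min(P(A), P(B)).
    return p[a_and_b] <= min(p[a], p[b])

ok = conjunction_ok(beliefs,
                    "Mary is a bank teller",
                    "Mary is a feminist",
                    "Mary is a feminist and bank teller")
# ok is False: only the axiom, not "P" itself, rules the assignment out.
```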
Yes, that's why I only said "less arbitrary".
I don't think I can agree even with that.
Previously we arbitrarily assumed that a particular sample space corresponds to a problem. Now we are arbitrarily assuming that a particular set of possible worlds corresponds to a problem. In the best case we are exactly as arbitrary as before and have simply renamed our set. In the worst case we are making a lot of extra unfalsifiable assumptions about metaphysics.
You could theoretically believe to degree 0 in the propositions "the die comes up 6" or "the die lands at an angle". Or that the die comes up as both 1 and 2 with some positive probability. There is no requirement that your degrees of belief are accurate relative to some external standard. It is only assumed that the beliefs we do have compose in a way that adheres to the axioms of probability theory. E.g. P(A)≥P(A and B). Otherwise we are, presumably, irrational.
Well, technically P(Ω) = 1 is an axiom, so you do need a sample space Ω if you want to adhere to the axioms.
But sure, if you do not care about accurate beliefs and systematic ways to arrive at them at all, then the question is, indeed, not interesting. Of course, then it's not clear what use probability theory is for you in the first place.
You may assume that this is how Albert managed to persuade Barry to continue)
A less arbitrary way to define a sample space is to take the set of all possible worlds.
And how would you know which worlds are possible and which are not?
How would Albert and Barry use the framework of "possible worlds" to help them resolve their disagreement?
But for subjective probability theory a "sample space" isn't even needed at all. A probability function can simply be defined over a Boolean algebra of propositions. Propositions ("events") are taken to be primary instead of being defined via primary outcomes of a sample space.
This simply passes the buck from "What is the sample space corresponding to a particular problem?" to "What is the event space corresponding to a particular problem?". You've renamed your variables, but the substance of the issue is still the same.
How would you know whether

P(the die comes up 6) = 1/6

or

P(the die comes up 6) = 0

for a die roll?
By picking your axioms you logically pinpoint what you are talking about in the first place. Have you read Highly Advanced Epistemology 101 for Beginners? I'm noticing that our inferential distance is larger than it otherwise should be.
No, you are missing the point. I'm not saying that this phrase has to be an axiom itself. I'm saying that you need to somehow axiomatically define your individual words, assign them meaning, and only then, with regard to these language axioms, is the phrase "Bachelors are unmarried" valid.
You've drawn the graph yourself showing how meaning is downstream of reality. This is the kind of entanglement we are talking about. The choice of axioms is motivated by our experience with stuff in the real world. Everything else is beside the point.
Yes. That's, among other things, what not being instrumentally exploitable "in principle" means. Epistemic rationality is a generalisation of instrumental rationality, the same way arithmetic is a generalisation from the behaviour of individual objects in reality. It picks out the kind of beliefs that are not exploitable in any case other than literally adversarial ones, such as a mindreader specifically rewarding people who do not have such beliefs.
I think the problem is that you keep using the word Truth to mean both Validity and Soundness and therefore do not notice when you switch from one to another.
Validity depends only on the axioms. As long as you are talking about some set of axioms in which P is defined in such a way that P(A) ≥ P(A&B) is a valid theorem, no appeal to reality is needed.
Likewise, you can talk about a set of axioms where P(A) ≤ P(A&B). These two statements remain valid with regard to their respective axioms.
But the moment you claim that this has something to do with the way beliefs - a thing from reality - are supposed to behave you start talking about soundness, and therefore require a connection to reality. As soon as pure mathematical statements mean something you are in the domain of map-territory relations.
The territory behaved in a way that we can now describe in the map as 2+3=5. But no maps existed back then. If we are in agreement about that, there is nothing substantial to argue about.