Logos01 comments on Practicing what you preach - Less Wrong
To fail to allow for others to be mistaken when weighting your own beliefs is to risk forming false beliefs yourself. Furthermore, establishing the reliability of a person's mechanisms for arriving at a belief is necessary, for any given claim, before expertise on that claim can be validated. The process of establishing that expertise then becomes the argument, rather than the mere assertion of the expert.
We use trust systems -- trusting the word of experts without investigation -- not because it is a valid practice but because it is a necessary failing of the human condition that we lack the time and energy to properly investigate every possible claim.
You must of course allow for the possibility of the other person being mistaken, otherwise you would simply substitute their probability estimate for your own. But to fail to update on the fact of someone's belief prior to obtaining further information on the reliability of their mechanisms for determining the truth means defaulting to an assumption of zero reliability.
One should always assign zero reliability to any statement in and of itself, at which point it is the reliability of said mechanisms which is the argument, rather than the assertion of the individual himself. I believe I stated something very much like this already.
-- To rephrase this: it is not enough that Percival the Position-Holder tell me that Elias the Expert believes X. Elias the Expert must demonstrate to me that his expertise in X is valid.
If you have no evidence that Elias the Expert has any legitimate expertise, then you can reasonably weight his belief no more heavily than any random person holding the same belief.
If you know that he is an expert in a legitimate field that has a track record for producing true information, and he has trustworthy accreditation as an expert, you have considerably more evidence of his expertise, so you should weight his belief more heavily, even if you do not know the mechanisms he used to establish his belief.
Suppose that a physicist tells you that black holes lose mass due to something called Hawking radiation, and you have never heard this before. Prior to hearing any explanation of the mechanism or how the conclusion was reached, you should update your probability that black holes lose mass to some form of radiation, because it is much more likely that the physicist would come to that conclusion if there were evidence in favor of it than if there were not. You know enough about physicists to know that their beliefs about the mechanics of reality are correlated with fact.
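As a rough sketch of that update (every number here is made up purely for illustration):

```python
# Illustrative numbers only: the assertion is evidence because it is more
# likely to be made if Hawking radiation is real than if it is not.

def posterior(prior, p_assert_given_true, p_assert_given_false):
    """Bayes' rule: P(claim is true | expert asserts it)."""
    numerator = prior * p_assert_given_true
    return numerator / (numerator + (1 - prior) * p_assert_given_false)

prior = 0.05   # your credence before the physicist says anything (assumed)
updated = posterior(prior, p_assert_given_true=0.90, p_assert_given_false=0.05)
print(round(updated, 3))   # ~0.486 -- a substantial update, before hearing any mechanism
```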
No. What you should do is ask for a justification of the belief. If you do not have the resources available to you to do so, you can fail-over to the trust system and simply accept the physicist's statement unexamined -- but utilization of the trust-system is an admission of failure to have justified beliefs.
I know enough about physicists, actually, to know that if they cannot relate a mechanism for a given phenomenon and a justification of it upon inquiry, I have no reason to accept their assertions as true, as opposed to speculation. If I am to accept a given statement on any level higher than "I trust so" -- that is, if I am to assign a high enough probability to the claim that I would claim myself that it were true -- then I cannot rely upon the trust system but rather must have a justification of belief.
Justification of belief cannot be "A person who usually is right in this field claims this is so" but can be "A person who I have reason to believe would have evidence on this matter related to me his assessment of said evidence."
The difference here is between having a buddy who is a football buff telling you by how much the Sportington Sports beat the Homeland Highlanders last night -- even though you don't know whether he had any means of obtaining that information -- as opposed to a friend who you know watched the game telling you the score.
If you want to increase the reliability of your probability estimate, you should ask for a justification. But if you do not increase your probability estimate contingent on the physicist's claim until you receive information on how he established that belief, then you are mistreating evidence. You don't treat his claim as evidence in addition to the evidence on which it was conditioned; you treat it as evidence of the evidence on which it was conditioned.

Once you know the physicist's belief, you cannot expect to raise your confidence in that belief upon receiving information on how he came to that conclusion. You should assign weight to his statement according to how much evidence you would expect a physicist in his position to have if he were making such a statement, and then when you learn what evidence he has, you shift upwards or downwards depending on how that evidence compares to your expectation. If you revised upwards on the basis of the physicist's say-so, and then revised further upwards based on his having about as much evidence as you would expect, that would be double-counting evidence; but if you do not revise upwards based on the physicist's claim in the first place, that would be assuming zero correlation of his statement with reality.
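To make the difference between those two update paths concrete, here is a minimal sketch in log-odds form, with the prior and the evidence strengths chosen arbitrarily for illustration:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

prior = 0.05                    # credence before the physicist speaks (assumed)
expected_evidence = 4.0         # log-odds (nats) of evidence you expect him to have (assumed)
actual_evidence = 5.0           # what he turns out to have -- slightly more than expected

# The claim already prices in the expected evidence; learning the actual
# evidence only moves you by the surprise (actual minus expected).
after_claim = inv_logit(logit(prior) + expected_evidence)         # ~0.74
after_details = inv_logit(logit(prior) + actual_evidence)         # ~0.89

# Double counting would add the full actual evidence on top of the claim:
double_counted = inv_logit(logit(after_claim) + actual_evidence)  # ~0.998 -- too high

print(round(after_claim, 2), round(after_details, 2), round(double_counted, 3))
```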
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Anything that is more likely if a belief is true than if it is false is evidence which should increase your probability estimate of that belief. Have you read An Intuitive Explanation of Bayes' Theorem, or any of the other explanations of Bayesian reasoning on this site?
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won, weighted according to your estimate of how likely his claim is to correlate with reality. If you know that he watched the game, you're justified in assuming a very high correlation with reality (although you also have to condition your estimate on information aside from whether he is likely to know, such as how likely he is to lie.) If you do not know that he watched the game last night, you will have a different estimate of the strength of his claim's correlation with reality.
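In odds form, the same claim carries different weight depending on what you know about how he could know; a sketch with assumed likelihood ratios:

```python
def posterior_from_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' rule, converted back to a probability."""
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

prior_odds = 1.0         # assumed: even odds on which team won before he speaks

# Likelihood ratio P(he says the Sports won | they won) / P(he says so | they lost):
lr_if_he_watched = 50.0  # assumed: you know he watched the game
lr_source_unknown = 5.0  # assumed: you don't know how he would know

print(round(posterior_from_odds(prior_odds, lr_if_he_watched), 2))   # ~0.98
print(round(posterior_from_odds(prior_odds, lr_source_unknown), 2))  # ~0.83
```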
I have read them repeatedly, and explained the concepts to others on multiple occasions.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source -- it cannot have a correlation to reality if there is no vehicle, other than his own imagination, through which the information he claims to have could have reached him, however accurate that imagination might be.
Which requires a reason to believe that to be the case. Which in turn requires that you have some means of corroborating their claim; the weakest sufficient means -- in the case of experts -- being that they can relate observations that support their claim.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts' claims which cannot be corroborated are unsound information, and should carry no weight in your estimate of beliefs solely on the basis of their source.
If an expert's claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
Furthermore, the more such claims fall in line with our own priors -- that is, the less 'extraordinary' the claims appear to us -- the more likely we are to forgo requiring proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational -- instrumentally -- it is not a means of properly arriving at true beliefs.
I want to take this opportunity to once again note that what I'm describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer's many works are, as a whole, targeted at instrumental rationality -- as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance between belief and truth an acceptable risk when asserting beliefs.
This is an area where "Bayesian rationality" is insufficient -- it fails to reliably distinguish between "what I believe" and "what I can confirm is true". It does this for a number of reasons, one of which is a foundational difference between what a Bayesian asserts is being measured when he discusses the probabilities in a Bayesian network and what a frequentist asserts is being measured when frequentists discuss probabilities.
I do not fall totally in line with "Bayesian rationality" in this, and various other, topics, for exactly this reason.
What? No they aren't. They are massively biased towards epistemic rationality. He has written a few posts on instrumental rationality but by and large they tend to be unremarkable. It's the bulk of epistemic rationality posts that he is known for.
Really? In that case you should hopefully be able to interact correctly with probabilities like p(Elias asserts X | X is true) and p(Elias asserts X | X is false).
It ought to prevent you from making errors like this:
Assuming he was able to explain them correctly, which I think we have a lot of reason to doubt.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don't know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception or internet access where he spent the last couple of days, then you should assume an extremely low correlation of his statement with reality. But if you do not know the mechanism, you must weight his statement according to the strength that you expect his mechanism for establishing correlation with the truth has.
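One way to read "weight his statement according to the strength you expect his mechanism to have" is as an average over the mechanisms he might plausibly have used; a sketch with assumed numbers:

```python
# If you don't know how your friend learned the result, the effective weight of
# his claim is a mixture over the mechanisms he might have used.  All numbers
# below are assumptions for the sake of illustration.
mechanisms = [
    # (P(he used this mechanism), P(claim | team won), P(claim | team lost))
    (0.60, 0.95, 0.02),   # watched the game
    (0.30, 0.90, 0.05),   # checked online or called a friend
    (0.10, 0.50, 0.50),   # just guessing -- no correlation with reality
]

p_claim_given_true = sum(w * pt for w, pt, _ in mechanisms)
p_claim_given_false = sum(w * pf for w, _, pf in mechanisms)
print(round(p_claim_given_true / p_claim_given_false, 1))   # effective likelihood ratio ~11.6
```

In the locked-room case, nearly all the weight shifts to the "just guessing" row, and the effective likelihood ratio collapses toward 1.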
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement's correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The "trust mechanism," as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others' statements on what we know about their likely mechanisms and motives.
You can't confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you're doing something wrong, but there is no point where a "mere belief" transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that "counts for actually knowing things" separate from that which doesn't.
This is like claiming that because a coin came up heads twenty times and tails ten times it is 2x more likely to come up heads this time. Absent some other reason to justify the correlation between your friend's accuracy and the current instance, such beliefs are invalid.
Yup. I said as much.
Yes, actually, it is a separate mechanism.
Yes, yes. That is the standard Bayesian statement. I'm not persuaded by it. It is, by the way, a foundational error to assert that absolute knowledge is the only form of knowledge. This is one of my major objections to standard Bayesian doctrine in general: the notion that there is no such thing as knowledge, only beliefs of varying confidence.
Bayesian probability assessments work very well for making predictions and modeling unknowns, but that's just not sufficient to the question of what constitutes knowledge, what is known, and/or what is true.
And with that, I'm done here. This conversation's gotten boring, to be quite frank, and I'm tired of having people essentially reiterate the same claims over and over at me from multiple angles. I've heard it before, and it's no more convincing now than it was previously.
This is frustrating for me as well, and you can quit if you want, but I'm going to make one more point which I don't think will be a reiteration of something you've heard previously.
Suppose that you have a circle of friends whom you talk to regularly, and someone uses some sort of threat to force you to write down every declarative statement your friends make, whether they provide justifications or not, until you have collected ten thousand of them in a journal.
Now suppose that this person has a way of testing the truth of these statements with very high confidence. They make a credible threat that you must estimate the number of statements in the journal that are true, within a small margin of error, or they will blow up New York. If you simply file a large number of your friends' statements under "trust mechanism," and fail to assign a probability that would let you estimate what proportion are right or wrong, millions of people will die. There is an actual right answer which will save those people's lives, and you want to maximize your chances of getting it. What do you do?
Let's replace the journal with a log of a trillion statements. You have a computer that can add the figures up quickly, and you still have to get very close to the right number to save millions of lives. Do you want the computer to file statements under "trust mechanism" or "confirmed knowledge" so that it can better determine the correct number of correct statements, or would you rather each statement be tagged with an appropriate probability, so that it can add them up to determine what number of statements it expects to be true?
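The number the computer needs is just the expected value of a sum of indicator variables, which falls straight out of the per-statement probabilities (the tallies below are invented for illustration):

```python
# By linearity of expectation, the expected number of true statements is just
# the sum of the probabilities assigned to each statement.  Made-up tallies:
probabilities = [0.99] * 4000 + [0.90] * 3000 + [0.60] * 2000 + [0.30] * 1000

expected_true = sum(probabilities)
print(expected_true)   # 3960 + 2700 + 1200 + 300 = 8160.0

# A binary "trusted" / "confirmed" tagging discards exactly the numbers this
# estimate depends on.
```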
If you don't assume that the coin is fair, then certainly a coin coming up heads twenty times and tails ten times is evidence in favor of it being more likely to come up heads next time, because it's evidence that it's weighted so that it favours heads.
Similarly if a person is weighted so that they favor truth, their claims are evidence in favour of that truth.
Beliefs like trusting the trustworthy and not trusting the untrustworthy, whether you consider them "valid" beliefs or not, are likely to lead one to make correct predictions about the state of the world. So such beliefs are valid in the only way that matters for epistemic and instrumental rationality both.
If 30 coin flips have come out that far off of even, I should move my probability estimate slightly towards the coin being weighted to one side. If, for example, all 30 flips had come up heads, I presume you would update in the direction of the coin being weighted to come down on one side. It won't be 2x as likely to come up heads, because the hypothesis that the coin is actually fair started with a very large prior. Moreover, the easy ways to make a coin weighted make it always come out on one side. But the essential Bayesian update in this context is to put a higher probability on the coin being weighted to be more likely to come up heads than tails.
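A sketch of that update over a small set of discrete hypotheses (the prior weights and bias values are assumptions chosen purely for illustration):

```python
from math import comb

# Three hypotheses about the coin's heads-probability, with a prior that
# heavily favours "fair".  All of these numbers are assumed for illustration.
hypotheses = {0.5: 0.90, 0.7: 0.05, 0.3: 0.05}   # P(heads) -> prior weight
heads, tails = 20, 10

def likelihood(p):
    """P(20 heads and 10 tails in 30 flips | coin's heads-probability is p)."""
    return comb(heads + tails, heads) * p**heads * (1 - p)**tails

unnormalized = {p: w * likelihood(p) for p, w in hypotheses.items()}
total = sum(unnormalized.values())
posterior = {p: w / total for p, w in unnormalized.items()}

p_heads_next = sum(p * w for p, w in posterior.items())
print({p: round(w, 3) for p, w in posterior.items()})  # {0.5: 0.781, 0.7: 0.219, 0.3: 0.0}
print(round(p_heads_next, 2))                          # ~0.54 -- above 1/2, nowhere near 2/3
```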