Comment author: [deleted] 03 September 2014 04:18:44AM *  1 point [-]

I feel like this is circular: you state your claim, I state my rebuttal, you concede in qualification, and then you return to your original claim.

I need to know how you came to that conclusion, which is slightly ambiguous here, in the sense that I can't understand the claim independently of the linguistic practice in terms of which your intended meaning is given.

In the case of basic and well-worn facts about the natural world, I think I understand their utterance - although I could be unaware of a particular convention or idiom - because I am already very aware of the linguistic practices which endow them with intersubjective force (if I were a peasant in the Holy Roman Empire, I would doubtless have no idea what you were attempting to convey or do).

Comment author: Keith_Coffman 03 September 2014 04:29:26AM 1 point [-]

Alright, since you could not verify the Earth being round without knowing my belief structure...

2+2 = 4

You don't know my belief structure. Is it true?

I'm not asking you if you know that off the top of your head, I'm asking if you could go out and check to see if it's actually true!

That's what I mean by evaluating a claim - can you verify it? I'm sorry, but it's asinine to say that you cannot verify it because you don't know how I came to the conclusion. You seem to be falling back on your point about sharing my language. I'm past that. If you understand the claim, you can test it.

Comment author: [deleted] 03 September 2014 03:50:47AM *  1 point [-]

I understand exactly what you're saying, but the qualification is divergent from your initial statement, from which this discussion arose, and to which you returned in the second paragraph cited above:

"we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim"

A condition of evaluating the veracity of an utterance is to register the utterance as intelligible, for which the aforementioned considerations to context are necessary, i.e. 'how someone formed their beliefs'.

Comment author: Keith_Coffman 03 September 2014 03:59:34AM *  1 point [-]

If it is divergent, then this

Let me distill this and see if you follow: We need to know what a claim is actually claiming - that can depend on context. Given that you do know what a claim is claiming, its veracity does not depend on context, nor the belief structure of the person behind the claim.

is what I meant. To provide an example, (which can quite often help in these situations):

I claim that the earth is approximately round.

You don't need to know how I came to that conclusion in order to evaluate my claim.

Had I claimed something a bit more complex, maybe related to the society that I currently live in, then you would probably need to know something about my society in order to see if my claim was correct. But you actually wouldn't need to know how I came to the conclusion - you just need to know what I'm talking about.

Comment author: [deleted] 03 September 2014 03:32:06AM *  1 point [-]

This appears to in one stroke admit qualification:

"Thus we can see how there are a few things that we need to keep in mind when we address a claim, much as you have said above. However, the truth of the claim, given that you understand the meaning and you are evaluating it at a particular time, does not depend on the belief structure."

And in the next revoke it:

"The reason I said "we shouldn't really care how someone formed their beliefs" is because the words that followed are "when evaluating the veracity of a claim," i.e. whether or not it is accurate. This is entirely independent of the person's reasons for making the claim."

The truthful content of a claim is not independent of the utterances which comprise it, such that an understanding of those utterances is a condition of finding that claim intelligible, and thus of its candidature for truth/falsity.

Comment author: Keith_Coffman 03 September 2014 03:40:04AM 1 point [-]

Let me distill this and see if you follow:

We need to know what a claim is actually claiming - that can depend on context.

Given that you do know what a claim is claiming, its veracity does not depend on context, nor the belief structure of the person behind the claim.

Comment author: [deleted] 02 September 2014 09:17:36PM 1 point [-]

"I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim".

This is an absurd proposition on several accounts. Firstly, a great deal of utterance meaning can only be recovered relative to a particular context, for it has complex and variable uses shifting within and across contexts, i.e. the exchange of agreement formalising marriage is not a mono-semantical reference to an internal psychological state, but does something only understandable relative to a particular convention of marriage. The upshot being that a condition of intelligibility is contextual awareness. Secondly, it is important to at least be aware of the structures of understanding through which particular intellectual subcultures and traditions give rise to scholarly output (i.e. you can't satisfactorily understand and evaluate a Marxist-Leninist work independently of the sociological reality of post-Cold War vanguard parties, or modern European intellectual history).

Comment author: Keith_Coffman 03 September 2014 03:03:10AM *  1 point [-]

The meaning of a claim can, in fact, change based on the context. Moreover, the truth of a claim may change with time (for instance, the claim "Elvis is alive" was at one point true and is now false). Also note that, in the context of me making up a simple example of a claim to demonstrate my point, the meaning likely refers to the famous performer Elvis Presley rather than any person named Elvis.

Thus we can see how there are a few things that we need to keep in mind when we address a claim, much as you have said above. However, the truth of the claim, given that you understand the meaning and you are evaluating it at a particular time, does not depend on the belief structure.

The reason I said "we shouldn't really care how someone formed their beliefs" is because the words that followed are "when evaluating the veracity of a claim," i.e. whether or not it is accurate. This is entirely independent of the person's reasons for making the claim.

Comment author: bigjeff5 04 February 2011 05:59:43PM 0 points [-]

Yeah, it's a roundabout inference that I think happens a lot. I notice it myself sometimes when I hear X, assume X implies Y, and then later find out Y is not true. It's pretty difficult to avoid, since it's so natural, but I think the key is when you get surprised like that (and even if you don't), you should re-evaluate the whole thing instead of just adjusting your overall opinion slightly to account for the new evidence. Your accounting could be faulty if you don't go back and audit it.

Comment author: Keith_Coffman 03 September 2014 02:53:43AM 0 points [-]

I think we should also separate the subjects of the psychology behind when this might happen and whether or not we are using scales.

It may indeed be the case that people are bad accountants (although I rarely find myself assuming these implied things, and further if I find that my assumptions are wrong I adjust accordingly), but this doesn't change the fact that we are adding +/- points (much like you're keeping score/weighing the two alternatives).

Assuming a perfectly rational mind was approaching the proposition of reactor A vs reactor B (and we can even do reactor C...), then the way it would decide which proposition is best is by tallying the pros/cons to each proposition. Of course, in reality we are not perfectly rational and moreover different people assign different point-values to different categories. But it is still a scale.
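To make the scale concrete, here is a minimal sketch of such a tally in Python. The reactors, criteria, and weights are all invented for illustration; in practice different people would plug in different numbers, which is exactly the point about point-values varying by person.

```python
# Hypothetical criteria weights: negative for cons, positive for pros.
WEIGHTS = {"cost": -2, "waste": -3, "output": 5, "safety": 4}

def score(option):
    """Weighted tally of pros and cons for one proposition."""
    return sum(WEIGHTS[criterion] * value for criterion, value in option.items())

# Invented ratings (0-5) for reactors A and B on each criterion.
reactors = {
    "A": {"cost": 3, "waste": 1, "output": 4, "safety": 5},
    "B": {"cost": 2, "waste": 4, "output": 5, "safety": 3},
}

best = max(reactors, key=lambda name: score(reactors[name]))
```

Changing the weights can change the winner, which is the sense in which the comparison is still a scale even when different evaluators assign different point-values.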

Comment author: Eliezer_Yudkowsky 13 March 2007 06:27:17PM 5 points [-]

Hal, even on binary decisions, the affect heuristic still leads to double-counting the evidence. If being told that the plant produces less waste causes us to feel, factually incorrectly, that the plant is less likely to melt down, then the same argument is being counted as two weighting factors instead of one.

Comment author: Keith_Coffman 03 September 2014 02:41:30AM *  0 points [-]

I would call coming to conclusions like this a shortcoming of our rational thinking, rather than the weighing of benefits and costs to a decision. What HalFinney said is completely right, in that we very often have to pick alternatives as a package, and in doing so we are forced to weigh factors for and against a proposition.

Personally, I wouldn't have "factually incorrectly" jumped to the conclusion you stated here (especially if the converse is stated explicitly as you did here), and I think this is a diversion from the point that you are necessarily (and rationally) weighing between two alternatives in this particular example that you chose.

That being said, I wholeheartedly agree with the idea of evaluating claims based on their merits rather than the people who propose them - that's the rational way to do things - and rational people would indeed keep a notebook even if, in the end, it was going to end up on a scale (or a decision matrix).

Comment author: Stefan_Schubert 08 August 2014 04:05:00PM 3 points [-]

That's interesting. A colleague of mine raised a similar issue, namely that in a popular science book you don't necessarily want to complicate things by including countervailing factors. In your terms, you settle for the briefest possible explanation. Diamond's and Pinker's books are directed both towards the scientific community and towards the general public, so it's a bit of a tricky case, but since they are such high-profile scientists and since their books have been so influential, I think it is legitimate to criticize them on this score.

A perhaps more glaring example is this. Man City won the Premier League in 2012 on goal difference thanks to a 94th-minute goal which put them ahead of Man Utd. Afterwards, a Swedish pundit was asked to explain why Man City won the Premier League. This is in a sense absurd, since it's clear that if a 38-match league is settled in stoppage time of the last game, there is very little that distinguishes the two teams in terms of quality. But the pundit's reaction was also absurd: he went on to provide 4-5 reasons why Man City was better than Man Utd, to which my reaction was, well, if they're better on so many scores, then why didn't they finish like 20 points ahead? The "briefest possible explanation" defense doesn't work here, since it would have been easier just to give one reason, and more adequate given the small difference between the teams, than 4-5. Instead, I believe that he did so because of a deeply felt urge to tell a "story". I think that the halo effect is at play here. Our system 1 wants to tell one-sided stories where the winning team had all the advantages and the losing team was worse across the board.

Now Diamond and Pinker are obviously better than football pundits, but I don't think that the examples are fundamentally different. They, too, are most likely to some degree engaging in story-telling.

Comment author: Keith_Coffman 03 September 2014 02:17:26AM 2 points [-]

Sorry for following you around so much (I just read this article since you linked to it in our other discussion)

There are two main points, both of which have largely been said or touched on already in your discussion here:

1) When discussing an event or something "playing out," we are talking about a cause and effect. Despite the fact that many things in life have many factors, there are always positive causes for things, which may or may not have counteracting factors. When we want to describe an effect of interest, then the simplest way to do it is to list the cause(s).

2) There are several factors (that I've thought of off the top of my head) that play into what kinds of points you provide when you are presenting a cause/effect relationship:

The first (which DavidAgain mentioned somewhat already) is whether you are trying to describe something that has happened or something that will happen. When we don't know what the outcome of something will be, we must exhaustively weigh all of the factors that we know of and their possible interactions in order to come to the best conclusion about the result. (Really there are two variations on this: what action should be taken vs. what will happen given the current state of the world, but the concept holds in each). If, however, something has already happened, it is reasonable to focus on the causes, A) because we know that they ended up "winning" and B) because there may or may not be negating factors involved in the first place.

If I say something along the lines of "I went swimming today because I was hot," it is not dishonest/biased to refrain from mentioning the fact that I weighed this course of action against several reasons not to do so - the important, primary causation was relayed in the statement and satisfies most people to the extent that they care about the factors involved.

Another factor that might be relevant is how contentious the subject is; even if you are debating something in the past, such as why X happened (or offering a proposal for why X happened), if the conclusion to be drawn is not readily agreed upon then it is prudent to first make sure that all of the relevant facts are presented. On the other hand, if you're trying to teach/explain why something happened in a non-contentious atmosphere, then it may be reasonable to omit facts that are unimportant to maintain coherency and avoid getting bogged down in clutter that doesn't matter to the overarching point. Which category Diamond's book falls under is a bit unclear, but I still am not convinced that it was biased to provide causes without enumerating all of the pros/cons, given that you trust him to the extent that he is telling the truth when he says that the Fertile Crescent was one of the most, if not the most, advantageous locations for the start of agriculture.

I am on the fence as to whether or not Jared Diamond was slightly biased in this case, but I think it depends on whether you look at his book from the perspective of a comprehensive argument/claim or a proposition of a different mechanism behind how things ended up the way they did which may or may not account for all of history in its complex entirety.

Anyway, I think trying to infer bias based on the presence of pros/cons is a difficult subject. I wouldn't go claiming someone is biased towards something for only presenting a positive message necessarily, even though this is often the case. Even in the example with the teams coming very close to a tie, the response to "why did they win" may have been correct, in that they had all of those factors in their favor and that those were enough to win (barely). I agree that in this case the guy was biased, but on the other hand they didn't ask him "what factors were involved and why did they favor Man City (somewhat)?"

That's about all I'll say for one response - I have a bad tendency of rambling on when I've already made the points that I really wanted to make.

(by the way DavidAgain I loved the way you said the things I was thinking with each consecutive response - I was vicariously participating in the discussion through your comments!)

Comment author: Stefan_Schubert 02 September 2014 07:06:31PM *  1 point [-]

This is a bit of a side-track. For the Bayesian interpretation of probability, it's important to be able to assign a prior probability to any event (since otherwise you can't calculate the posterior probability, given some piece of evidence that makes the event more or less probable). They do this using, e.g. the much contested principle of indifference. Some people object to this, and argue along your lines that it's just silly to ascribe probabilities to events we know nothing about. Indeed, the frequentists define an event's probability as the limit of its relative frequency in a large number of trials. Hence, to them, we can't ascribe a probability to a one-off event at all.

Hence there is a huge discussion on this already and I don't think that it's meaningful for us to address it here. Anyway, you do have a point that one should be a bit cautious ascribing definite probabilities to events we know very little about. An alternative can be to say that the probability is somewhere in the interval from x to y, where x and y are some real numbers between 0 and 1.
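As a rough illustration of why the principle of indifference is contested, here is a small Python sketch (the outcome lists are invented): it gives a sensible 1/2 for a clean two-outcome case, but spreads probability arbitrarily thin over a fine-grained exhaustive list.

```python
from fractions import Fraction

def indifference_prior(outcomes):
    """Principle of indifference: assign equal probability to each member
    of a mutually exclusive and exhaustive list of outcomes."""
    n = len(outcomes)
    return {outcome: Fraction(1, n) for outcome in outcomes}

# Two outcomes, e.g. an odd vs. even gumball count: 1/2 each.
parity = indifference_prior(["odd", "even"])

# A fine-grained exhaustive list of 1000 outcomes: 1/1000 each.
fine_grained = indifference_prior(range(1000))
```

The prior always sums to one, but each individual assignment shrinks as the list of outcomes grows, which is the worry about exhaustively enumerated multifaceted claims.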

Comment author: Keith_Coffman 02 September 2014 08:43:38PM 1 point [-]

I agree that it is largely off-topic and don't feel like discussing it further here - I would like to point out that the principle of indifference specifies that your list of possibilities must be mutually exclusive and exhaustive. In practice, when dealing with multifaceted things such as claims about the effects of changing the minimum wage, an exhaustive list of possible outcomes would result in an assignment of an arbitrarily small probability according to the principle of indifference. The end effect is that it's a meaningless assignment and you may as well ignore it.

Comment author: Stefan_Schubert 01 September 2014 11:26:02AM 0 points [-]

There's a whole bunch of information out there - literally more than any one person could/cares to know - and we simply don't have the time (or often the background) to fully understand certain fields and more importantly to evaluate which claims are true and which aren't.

In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.

It would seem to me that these claims aren't consistent. I agree with the first claim, not with the second. It's true that experts' claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we're better off deferring to experts (which we actually also do, as you say, on a massive scale).

I wrote a very long post on a related theme - "genetic arguments" - some time ago, by the way.

that bit about the 50% chance of being true is ridiculous even if you don't have any knowledge going into it - you would simply say "I don't know if these claims are true"

Well according to the betting interpretation of degrees of belief, this just means that you would, if rational, be willing to accept bets that are based on the claim in question having a 50 % chance of being true (but not bets based on the claim that it has, say, a 51 % chance of being true). But sure, sometimes it can seem a bit contrived to assign a definite probability to claims you know little about.
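The betting interpretation can be made concrete with a small expected-value sketch in Python (the stakes here are invented): a credence of 0.5 makes even-money bets exactly fair, while odds that imply a 51% chance have negative expected value.

```python
def expected_gain(credence, stake, payout):
    """Expected gain from risking `stake` to win `payout` on a claim
    to which you assign probability `credence`."""
    return credence * payout - (1 - credence) * stake

# At credence 0.5, risking 1 to win 1 (even money) is exactly fair.
fair = expected_gain(0.5, 1, 1)

# Risking 51 to win 49 corresponds to odds of a 51% chance; at
# credence 0.5 this bet has negative expected gain, so a rational
# agent declines it.
unfavourable = expected_gain(0.5, 51, 49)
```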

I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:

I don't agree with that. We use others' statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes - e.g. bias - then that's clearly not a good strategy. Hence we should care very much about whether someone is biased when evaluating the veracity of many claims, for that reason.

Also, as I said, if we find out that someone is biased, then we have little reason to use that person as a source of knowledge.

What I want to stress is the need for cognitive economy. We don't have time to check the direct evidence for different claims lots of the time (as you yourself admit above) and therefore have to use assessments of others' reliability. Knowledge about bias is a vital (but not the only) ingredient in our assessments of reliability, and is hence extremely useful.

Comment author: Keith_Coffman 01 September 2014 04:40:13PM 2 points [-]

I'm making a separate reply for the betting thing, only to try to keep the two conversations clean/simple.

Let's muddle through it: If I have a box containing an unknown (to you) number of gumballs and I claim that there are an odd number of gumballs, you would actually be quite reasonable in assigning a 50% chance to my claim being true.

If I claim that the gumballs in the box are blue, would you say there is a 50% chance of my claim being true?

What if I claimed that I ate pizza last night?

You might have a certain level of confidence in my accuracy and my reliability as a person to not lie to you; and, if someone was taking bets, you would probably bet on how likely I am to tell the truth, rather than assuming there was a 50% chance that I ate pizza last night.

If you then notice that my friend, who was with me last night, claims that I in fact ate pasta, then you have to weigh their reliability against mine, and more importantly you now have to start looking for reasons that we came to different conclusions about the same dinner. And finally, you have to weigh the effort it takes to vet our claims against how much you really care what I ate last night.

So, assuming you are rational, would you bet 50/50 that I ate pizza? Or would you just say "I don't know" and refuse to bet in the first place?


Comment author: Keith_Coffman 01 September 2014 04:26:19PM *  2 points [-]

There's a whole bunch of information out there - literally more than any one person could/cares to know - and we simply don't have the time (or often the background) to fully understand certain fields and more importantly to evaluate which claims are true and which aren't. In other words, reality is objective and claims should be evaluated based on their evidence, not the person who proposes them.

It would seem to me that these claims aren't consistent. I agree with the first claim, not with the second. It's true that experts' claims are objectively and directly verifiable, but lots of the time checking that direct evidence is not an optimal use of our time. Instead we're better off deferring to experts (which we actually also do, as you say, on a massive scale).

I think we are in agreement but my second statement didn't have the caveats it should have; I doubt you would disagree with the first half, that reality is objective. You disagreed with the second half, that claims should be evaluated based on evidence -- not because it's a false statement, but rather that, in practice, we cannot reasonably be expected to do this for every claim we encounter. I agree. The unstated caveat is that we should trust the experts until there is a reason to think that their claims are poorly founded, i.e. they have demonstrated bias in their work or there is a lack of consensus among experts in a similar field.

I guess the main thing I am trying to say that directly ties into your post is that we shouldn't really care how someone formed their beliefs when evaluating the veracity of a claim; when we should care is:

I don't agree with that. We use others' statements as a source of evidence on a massive scale (i.e. we defer to them). Indeed, experiments show that we do this automatically. But if these statements express beliefs that were produced by unreliable processes - e.g. bias - then that's clearly not a good strategy. Hence we should care very much about whether someone is biased when evaluating the veracity of many claims, for that reason.

Hold on now, you did read my bullets right? When we should care is:

  • When we suspect that a bias may have led to a false reporting of real information (in which case we would want independent, unbiased research/reporting)

Notice that I actually did say suspicion of bias is an exception to the "not caring" statement. In other words, unless we have a reason to suspect a bias (and/or the second bullet), then we probably won't care. There can be other ways of bad conclusions being drawn; the reason I mention bias is because it is systematic. If we see a trend of a particular person systematically coming to poor conclusions, whatever their reason, then our confidence in their input would fall. On the other hand, experts are human and can make mistakes as well - we should not dismiss someone for being wrong once but for being systematically wrong and unwilling to fix the problem. If we really care about high confidence in something, for instance in the cases where the truth of the claim is important to a lot of people and we want to avoid being misled if there are a few biased opinions, we seek the consensus.

Now, can we get the consensus all of the time? Unfortunately not. Not even most of the time. So what's our next line of defense? Well, one of them is journalistic integrity; frankly I don't even want to go there, but if done properly there are people whose job it is to sort through these very things - but really let's not go there for now. The last line of defense is yourself and the actual work of checking on things yourself.

If a claim is important enough for you to really care whether or not it's accurate, then you have to be willing to do a little bit of digging yourself. Now I realize that the entire point of this post was to avoid just that thing and to have computers do it automagically; but really, if it is important enough for you to check on it yourself, rather than just trusting your regular sources of information, then would you be willing not to check just because a program said that this guy was unbiased?

That might be a bit of an unfair characterization of what you're discussing, but there is a distinction to be made between using online behavior to measure/understand the general population's belief structure and to check for bias in expert opinions.

I think the idea of understanding the population's belief structures would still be extremely useful in its own right though, per my second bullet in the exceptions to the "don't care" statement - particularly if someone wants to change a lot of people's minds about something. If you have a campaign (be it political or social), then understanding how people have structured their beliefs would give you a road map for how best to go about changing them in the way you want. To some extent, this is how it's already been done historically, but it was not done via raw data analysis.
