
Mainstream Epistemology for LessWrong, Part 1: Feldman on Evidentialism

Post author: ChrisHallquist, 16 November 2013 04:16PM

Richard Feldman's Epistemology is a widely-used philosophy textbook published in 2003. I've decided to write a series of posts summarizing its contents, because it contains some surprisingly reasonable views (given what you may have heard about mainstream philosophy), and it also counters some common myths about what all philosophers supposedly know about evidence, the problem of induction, and so on. This installment briefly covers the first three chapters before moving on to Feldman's discussion of the philosophical view he calls evidentialism.


Chapter 1 is an introductory chapter, which explains that "The theory of knowledge, or epistemology, is the branch of philosophy that addresses philosophical questions about knowledge and rationality." Feldman says he will use the term "The Standard View" to refer to the collection of things we ordinarily think about knowledge and rationality.

On the Standard View, we know a large variety of things about a variety of topics, from science and mathematics to other people's mental states to the past and the future. Furthermore, the Standard View says that our primary sources of knowledge consist of perception, memory, testimony, introspection, reasoning, and rational insight. Those are just the sources of knowledge most people would agree on, though—Feldman acknowledges that some people might want to add to that list.

It's worth mentioning here that because philosophers agree on so little, when you read a philosophy textbook you should assume it will say some things that are just the author's personal opinion, rather than representing any professional consensus. And in describing the "Standard View," Feldman is mostly just trying to describe what ordinary people commonsensically believe, rather than claiming that it's the standard view within philosophy. But in fact, most philosophers probably would agree with what Feldman calls the "Standard View."

Chapter 1 also sketches the structure of the rest of the book: chapters 2-5 will develop the Standard View, while chapters 6-9 will consider various challenges and objections to the Standard View.

I'm going to mostly skip over chapters 2 and 3 because they cover a debate that was pretty accurately summarized by Luke Muehlhauser and Louie Helm in "Intelligence Explosion: Machine Ethics":

Since Plato, many have believed that knowledge is justified true belief. Gettier (1963) argued that knowledge cannot be justified true belief because there are hypothetical cases of justified true belief that we intuitively would not count as knowledge. Since then, each newly proposed conceptual analysis of knowledge has been met with novel counterexamples (Shope 1983). Weatherson (2003) called this the “analysis of knowledge merry go round.”

That said, it's worth briefly explaining Feldman's own solution to the problem, which will be relevant for understanding chapter 4. With rare exceptions, philosophers generally agree that true belief is necessary, but not sufficient, for knowledge. The question is how to replace or supplement the justification condition. Feldman's answer is to keep the justification condition, and furthermore add that knowledge requires that one's justification for a belief "does not essentially depend on any falsehood." (If you click the link to Gettier's paper, above, and read Gettier's examples, hopefully you will understand why this is appealing.)

Feldman includes a disclaimer saying, "The idea of essential dependence is admittedly not completely clear. However, it gives us a useful working definition of knowledge with which we can proceed." Being willing to invoke a somewhat unclear condition in his analysis of knowledge is probably a wise move on Feldman's part, given the history of attempts at more exact analyses being felled by clever counterexamples. And his analysis gives him a rationale for focusing, in the next chapter, on what it takes for a belief to be justified.

Chapter 4 is dedicated to discussing evidentialism, which Feldman defines as the view that a belief is justified for a particular person if and only if their evidence supports that belief. This is a view Feldman has defended in a number of journal articles, many of them co-authored with Earl Conee (see their anthology, Evidentialism). That this view has defenders within mainstream philosophy may surprise those who've heard that there's supposed to be a philosophical consensus that requiring beliefs to be based on evidence is self-defeating (no such consensus exists, but some philosophers claim it does).

Feldman emphasizes that evidentialism, in the form he defends, is an epistemological claim, not a moral or prudential one. He's not interested in defending the claim, found in William Clifford's famous essay "The Ethics of Belief," that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence" (at least, not if the claim is taken to be a moral one).

This is how Feldman responds to the "loyalty" objection to evidentialism, which argues that it is sometimes right to, for example, believe in a friend's innocence even when the evidence does not support that conclusion. Feldman allows that that might be right as a matter of morality (he doesn't say it is right), but says that doesn't change what the rational thing to believe in such a case is.

The rest of the chapter is a fairly long (given that the book as a whole is only 200 pages) discussion of the infinite regress argument, foundationalism, and coherentism. It's worth interjecting that, while Feldman presents the discussion not in terms of objections to evidentialism, but in terms of "ways in which the details of evidentialism might be spelled out," some philosophers have claimed the infinite regress argument as an argument for radical skepticism, while others have claimed it shows (as a matter of philosophical consensus!) that evidentialism specifically leads to radical skepticism.

Feldman's discussion could be read as a gentle rebuttal to these claims, even though he never addresses them directly. His version of the infinite regress argument is formulated as an argument for the existence of justified basic beliefs, which he defines as beliefs that are justified, but "not justified on the basis of any other beliefs." It runs as follows:

  1. Either there are justified basic beliefs, or each justified belief has an evidential chain that either: (a) terminates in an unjustified belief, (b) is an infinite regress of beliefs, or (c) is circular.
  2. But beliefs based on unjustified beliefs are not themselves justified, so no justified belief could have an evidential chain that terminates in an unjustified belief (that is, not (a)).
  3. No person could have an infinite series of beliefs, so no justified belief could have an evidential chain that is an infinite regress of beliefs (that is, not (b)).
  4. No belief could be justified by itself, so no justified belief could have an evidential chain that is circular (that is, not (c)).
  5. Therefore, there are justified basic beliefs.

Feldman lists three main responses to this argument: foundationalism, which accepts the argument's conclusion; coherentism, which rejects premise 4; and skepticism, which says the argument goes wrong in assuming that there are justified beliefs in the first place, and that in fact no beliefs can be justified. In this chapter, Feldman focuses on foundationalism and coherentism, leaving skepticism for later chapters.

First, he considers a view he calls "Cartesian foundationalism," named after René Descartes, though Feldman admits "that it is unlikely that Descartes actually would agree to all aspects of the view to be described." Cartesian foundationalism, in Feldman's sense, combines foundationalism with three further claims:

  1. Beliefs about one's own inner states of mind (appearance beliefs) and beliefs about elementary truths of logic are justified basic beliefs.
  2. Justified basic beliefs are justified because we cannot be mistaken about them. We are "infallible" about such matters.
  3. The rest of our justified beliefs (e.g., our beliefs about the external world) are justified because they can be deduced from our basic beliefs.

People who know something of Descartes' reputation within contemporary philosophy will not be surprised to find out that Feldman sees lots of problems with this view, including that our beliefs about our own inner states of mind aren't infallible, and that much of what we know cannot be deduced from beliefs about our own inner states of mind.

Next, Feldman discusses coherentism. The challenge for coherentism is developing it in a way that doesn't involve obvious circularity. As Feldman explains:

So coherentists reject premise (4) of The Infinite Regress Argument, the step of the regress argument that rejects circular evidential chains. This is not because they think that you can justify one belief by another, that second by a third, and then justify the third by appeal to the first. Rather, their idea is that justification is a more systematic and holistic matter, that each belief is justified by the way it fits into one's overall system of beliefs.

Feldman discusses various ways to develop this idea, but doesn't find any of them "suitable," because of, among other things, difficulties with giving a coherentist account of which beliefs are and are not justified, and with saying what coherence actually is. He also discusses the "isolation argument" against coherentism, which complains that coherentism seems to allow any beliefs to be justified when included in the right set of other beliefs, even if the whole set of beliefs is totally detached from reality. The point is that what matters for beliefs being justified isn't just other beliefs, but also things like our sensory experiences.

Finally, Feldman discusses his own preferred view, which he calls "modest foundationalism," which claims:

  1. Basic beliefs are spontaneously formed beliefs. Typically, beliefs about the external world, including beliefs about the kinds of objects experienced or their sensory qualities, are justified and basic. Beliefs about mental states can also be justified and basic.
  2. A spontaneously formed belief is justified provided it is a proper response to experiences and is not defeated by other evidence the believer has.
  3. Nonbasic beliefs are justified when they are supported by strong inductive inferences—including enumerative induction and inferences to the best explanation—from justified basic beliefs.

The main objection Feldman discusses to this view comes from a coherentist angle: Laurence BonJour's argument that there are no justified basic beliefs. The details of that discussion, though, are less interesting than this simple fact: the suggestion made by some anti-evidentialist philosophers that evidentialism is indisputably self-defeating is just plain wrong. The idea seems to be based on equating "basic belief" with "belief not based on evidence." That would seem to allow the infinite regress argument to be turned against evidentialism.

However, this ignores the fact that Feldman (or someone with similar views) would argue that basic beliefs are supported by evidence to the extent that they're a proper response to experience and not defeated by other evidence. Furthermore, many people would reject other premises of the "infinite regress argument against evidentialism," particularly coherentists rejecting premise 4. This by itself doesn't prove that evidentialism is true. It could still be wrong for other reasons. But at the very least, the case against it is nowhere near as clear-cut as some anti-evidentialist philosophers would like you to believe.


Note: I'm very much open to input on how to handle future installments in the series. In particular, I'm not sure how much information to try to cram into one post (this could've easily been two), and I'm not sure if the next post should cover nonevidentialist epistemologies, or if I should just skip straight to skepticism.

Comments (82)

Comment author: jockocampbell 15 November 2013 09:58:20PM *  4 points [-]

Philosophy seems to have made little progress defining knowledge since Plato's 'justified true belief'. I concur with this definition given three, hopefully minor, caveats:

1) Beliefs and therefore knowledge are not understood as restricted to humans. This perhaps requires that 'beliefs' be replaced with 'expectations'. 'Expectation' or expected value is a property of any model in the form of a probability distribution. The expected value of the 'ignorance' of such a model is its information entropy. It is the amount of information required to move the model to certainty through Bayesian updating. Entropy is information, and all information is defined as the negative log of a probability. (See the Wikipedia page http://en.wikipedia.org/wiki/Self-information) The inverse of entropy is a probability: two raised to the power of the negative of the entropy in bits. Thus if the information entropy of a model is 3 bits, the inverse probability would be one eighth. (It would be easier writing this if some mathematical symbols were available.) As this probability is the inverse of a model's ignorance, I suggest it be considered as a definition of knowledge. Thus knowledge would be defined as a property of models and would encompass a wider range of natural phenomena, including the knowledge within an organism's genetic model.

2) 'Justified' be understood in the Bayesian sense as justified by the evidence. Justified in a Bayesian context is not absolute but refers to degrees of plausibility in the form of Bayes factors or 'odds'. An early use of Bayes factors was by Turing in his cracking of the Enigma code; he needed a measure of 'justification' for deciding if a given key combination cracked a given code variation.

3) 'True' be dropped from the definition. Knowledge, especially scientific knowledge, deals with degrees of plausibility given uncertain information. Logic involving true and false values is a special case of Bayesian probability (where values are restricted to only 0 and 1; see Jaynes, Probability theory: the logic of science). The necessary constraint on the definition is therefore accomplished with 'justified' as described above.

After these alterations knowledge is defined as justified expectations.
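The two quantitative ideas in caveats 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration only; all numbers and names below are my own, not from the comment:

```python
import math

def entropy_bits(dist):
    """Shannon entropy of a discrete probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Caveat 1: a uniform distribution over 8 outcomes has entropy 3 bits,
# and two raised to the negative entropy recovers the probability 1/8.
H = entropy_bits([1/8] * 8)
p = 2 ** -H

# Caveat 2: a Bayes factor is the ratio of the likelihoods of the data
# under two competing hypotheses; it converts prior odds to posterior odds.
prior_odds = 1.0              # even odds between H1 and H2
bayes_factor = 0.8 / 0.2      # the data are 4x more likely under H1
posterior_odds = prior_odds * bayes_factor
```

Note that the probability recovered from the entropy is two to the power of minus the entropy, which is what makes the 3-bits-to-one-eighth example in the comment come out right.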

Comment author: ChrisHallquist 15 November 2013 10:51:28PM *  3 points [-]

Two points:

1) Have you read Gettier's paper "Is Justified True Belief Knowledge?"? I recommended it; it seems to create problems for the JTB analysis of knowledge even assuming a Bayesian understanding of "justified."

2) You're misunderstanding the purpose of "true" in the JTB definition. It's not a matter of assigning probability 1 to a proposition, it's a matter of the proposition actually being true. As Eliezer would say, don't confuse uncertainty in the map with uncertainty in the territory. Pick your favorite case of a scientific theory that was once well supported by the evidence, but turned out to be false. Back when available evidence supported it, did scientists know it was true?

Comment author: komponisto 16 November 2013 10:11:50PM 2 points [-]

1) Have you read Gettier's paper "Is Justified True Belief Knowledge?"? I recommended it; it seems to create problems for the JTB analysis of knowledge even assuming a Bayesian understanding of "justified."

As I argued in this comment from 2011, the intuitive reaction to the Gettier scenario is based on a probability-theoretic mistake analogous to the conjunction fallacy (you might call it the "disjunction fallacy").

Comment author: Manfred 18 November 2013 11:16:02AM *  0 points [-]

2) You're misunderstanding the purpose of "true" in the JTB definition. It's not a matter of assigning probability 1 to a proposition, it's a matter of the proposition actually being true.

Yeah, but the trouble is that we don't know if a non-tautological statement is true or not. 'S like we have some kind of uncertainty or incomplete information. So in order to evaluate what we know, it seems like rather than trying to make it depend on what's true or not, we could use some kind of system for reasoning under uncertainty.

Comment author: pragmatist 18 November 2013 11:37:23AM 4 points [-]

I don't see the problem. Sure, we can't establish with complete certainty whether some proposition is true. It would then follow that we can't establish with complete certainty whether someone genuinely knows that proposition. But why require complete certainty for your knowledge claims? Just as our truth claims are uncertain and subject to revision, our knowledge claims are as well.

Comment author: Manfred 18 November 2013 01:17:08PM *  0 points [-]

So whether I know ("probabilistic JTB") something or not can depend on who's doing the evaluating, and what information they have? This ranges pretty far from the platonic assumptions behind Gettier problems.

Comment author: pragmatist 18 November 2013 01:28:52PM *  3 points [-]

No, that doesn't follow. Whether you know a proposition is an objective fact, just as the truth of a proposition is an objective fact. The probabilistic element is just that our judgments about knowledge are uncertain, just as our judgments about truth more generally are uncertain.

Example:

P: "Barack Obama is the American President."

This is a statement that is very probably true, but I don't assign it probability 1. Let's say its probability (for me) is 0.9.

KP: "Manfred knows that Barack Obama is the American President."

This statement assumes that P is in fact true. So the probability I assign this knowledge claim must be less than the probability I assign to P (assuming JTB). It must be less than 0.9. Now maybe someone else assigns a probability of 0.99 to P, in which case the probability they assign KP may well be greater than 0.9. So, yeah, the probabilities we attach to knowledge claims can depend on how much information we have. But that doesn't change the fact that KP is objectively either true or false. The mere fact that different people assign different probabilities to KP based on the information they have doesn't contradict this.
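The arithmetic here can be sketched as follows. The 0.8 conditional probability is an illustrative value of my own; the point is only that the knowledge claim can be no more probable than P itself:

```python
# On a JTB-style reading, KP ("Manfred knows P") entails P, so
# P(KP) = P(P) * P(justified belief | P) <= P(P).
p_P = 0.9            # probability assigned to P
p_jb_given_P = 0.8   # illustrative: probability of justified belief, given P
p_KP = p_P * p_jb_given_P

assert p_KP <= p_P
```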

[NOTE: As a matter of fact, I don't think KP is determinately either true or false. I think what we mean by "knowledge" varies by context, so the truth of KP may also vary by context. For this sort of reason, I think an epistemology focused on the concept of "knowledge" is a mistake. Still, this is a separate issue from whether JTB makes sense.]

Comment author: Manfred 19 November 2013 04:40:13AM 0 points [-]

Man, I can really see why arguing about this stuff produces lots of heat and little light. Sorry about not being very constructive. Yes, you're right - there's a decent way to translate "JTB" into probabilistic terms, which is to put a probability value on the T, assume that I B if my probability for a statement is above some threshold, and temporarily ignore the definition issues with J. Then you can assign a statement like KP the appropriate probability if my probability is above the threshold, and 0 if my probability is below the threshold.
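The translation described above might be sketched like this (the threshold value is arbitrary, and the function name is my own):

```python
BELIEF_THRESHOLD = 0.5  # illustrative: the agent "believes" P above this

def p_knows(p_statement):
    """Probability assigned to 'the agent knows P', per the translation:
    equal to P's own probability when the belief threshold is met, else 0."""
    return p_statement if p_statement > BELIEF_THRESHOLD else 0.0
```

For example, on this scheme a statement assigned 0.9 yields a knowledge claim at 0.9, while one assigned 0.3 yields a knowledge claim at 0.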

Comment author: hyporational 18 November 2013 06:05:57AM *  -1 points [-]

It's not a matter of assigning probability 1 to a proposition, it's a matter of the proposition actually being true.

I'm not sure I understand the difference. How is one supposed to have that information? I can imagine a proposition actually being true, but that's about it.

ETA: From the deepest pit of the following comment thread:

The way I read the quote is:

A proposition being true doesn't mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what the world is actually like, it is true.

This in turn could be read as:

A proposition being true doesn't mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what someone knows about the world with probability of 1, it is true.

Comment author: pragmatist 19 November 2013 08:43:32PM *  1 point [-]

Your first reading seems OK to me. Actually, I don't think it expresses the same thought as the quote you're responding to, but it is a plausible implication of that thought.

I'm not sure how you move from the first reading to the second one, though. In fact, I don't even understand the second reading, specifically this part:

and it coincides with what we know about the world with probability of 1

What do you mean when you say that the proposition "coincides with" what we know about the world? Do you just mean that the proposition expresses some aspect of our model of the world? But then how could it have probability 0.9 and yet our model have probability 1? That would be incoherent. But I can't come up with any other interpretation of what you mean by "coincides with" here (or, for that matter what you mean by "know", given that you're rejecting a JTB type analysis). Help?

Comment author: hyporational 19 November 2013 11:44:44PM *  -1 points [-]

a plausible implication of that thought.

That's what it's trying to be. Could you provide an example how you would express the exact same thought with different words? I'd like to know if I'm attacking a strawman here.

What do you mean when you say that the proposition "coincides with" what we know about the world?

If our p 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100 % accurate model of what the world is actually like to make that claim. Otherwise we're just playing tricks with our imaginations. As I tried to express before, I can imagine a true territory out there, but since nobody can verify it being there, i.e. have a perfect map, it's a pointless concept for the purposes we're discussing here.

That would be incoherent.

I'm trying to convey why a particular notion of truth is incoherent, but I'm not sure we agree about that yet.

Comment author: TheAncientGeek 20 November 2013 10:03:46AM 2 points [-]

Would the model still be 100% accurate if there were a label on P saying "only 90% certain"?

Comment author: hyporational 20 November 2013 10:19:59AM *  0 points [-]

Why don't you read the paper and try how that fits yourself, and then ask yourself, is this really what they intend?

Comment author: TheAncientGeek 20 November 2013 10:26:09AM 3 points [-]

I've read Gettier's famous paper, a long time ago, and he doesn't discuss models or probabilities.

Comment author: hyporational 20 November 2013 01:07:12PM 1 point [-]

Do you think it can be understood in a probabilistic framework, or will that just yield nonsense?

Comment author: TheAncientGeek 20 November 2013 01:32:46PM 1 point [-]

I've seen science types try to reinterpret mainstream philosophy in terms of probability and information several times, and it tends to go nowhere. Why not understand philosophy in its own terms?

Comment author: nshepperd 20 November 2013 12:30:59AM 0 points [-]

it's a pointless concept for the purposes we're discussing here.

Seems to me it's not pointless, because your failure to understand it is clearly holding you back...

Why are you failing to distinguish between "P" and "a person claiming P"? They are distinct things. Snow being white has nothing to do with who or what thinks snow is white. And there's no reason anyone needs a "perfect map" to talk about truth any more than a perfect map is needed to talk about snow being white.

Comment author: hyporational 20 November 2013 02:32:04AM *  0 points [-]

Quoting Chris:

It's not a matter of assigning probability 1 to a proposition, it's a matter of the proposition actually being true.

How would you interpret "actually being true" here? Say you have evidence for a proposition that makes it 0.9 probable. How would you establish that the proposition is also true? (Understand that I'm not saying you should.)

Comment author: TheAncientGeek 20 November 2013 10:38:47AM *  2 points [-]

Interpreting the meaning of "is true" and establishing that something "is true" are two different things -- namely, semantics and epistemology. It's common in science to sidestep semantic questions with operational answers, but that doesn't necessarily work in other areas.

Comment author: hyporational 20 November 2013 12:55:42PM 1 point [-]

Can you give more examples of such sidestepping where it doesn't work?

Comment author: TheAncientGeek 20 November 2013 01:20:55PM 1 point [-]

It's more a case of noting that there is no reason for it to work everywhere, and no evidence that it works outside of special cases.

Comment author: nshepperd 20 November 2013 05:18:43AM 0 points [-]

If you have evidence that makes P 90% probable, then your evidence has established a 90% chance of P being true (which is to say, you are uncertain whether P is true or not, but you assign 90% of your probability mass to "P is true", and 10% to "P is false"). The definition of "truth" that makes this work is very simple: let "P" and "P is true" be synonymous.

Comment author: hyporational 20 November 2013 05:46:20AM 0 points [-]

I agree with you here completely. I was just wondering if particular philosophers had something more nonsensical in mind.

Comment author: somervta 20 November 2013 06:03:36AM 0 points [-]

Perhaps. For the purposes of 'knowledge', whether or not you actually have knowledge of X depends on whether or not X is true, so knowledge is dependent on more than just your state of mind.

Someone upthread asked how you can "possibly have" the information that X is true, and in a sense you can't, you can only get more certain of it.

Did any of that help?

Comment author: hyporational 20 November 2013 12:40:48AM *  0 points [-]

Why are you failing to distinguish between "P" and "a person claiming P"? They are distinct things.

I'm not; I know they're distinct things. It seems to me you misunderstood me. What's with the tone?

And there's no reason anyone needs a "perfect map" to talk about truth any more than a perfect map is needed to talk about snow being white.

I know that.

Comment author: nshepperd 20 November 2013 01:16:40AM 0 points [-]

So if you agree about that, why are you saying things like

If our p 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100 % accurate model of what the world is actually like to make that claim.

How is the "if" connected to the "then" of that sentence? Your thinking isn't making any sense to me.

Comment author: hyporational 20 November 2013 01:23:06AM 0 points [-]

That quote shouldn't make sense to you, and it's not my thinking. Keep in mind I'm not endorsing a notion of truth here, I'm questioning it.

Comment author: hyporational 20 November 2013 01:54:47AM -1 points [-]

Snow being white has nothing to do with who or what thinks snow is white.

White and snow wouldn't exist without someone thinking about them so I'm not sure what you're trying to say here.

Comment author: nshepperd 20 November 2013 05:19:44AM 0 points [-]

What goes on in mountains when no-one is thinking about them...?

Comment author: pragmatist 18 November 2013 07:08:01AM *  0 points [-]

Don't you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true? If you agree with this, then you acknowledge a difference between truth and assigning probability 1.

As for how one is supposed to have information about a proposition being actually true -- through evidence causally associated with the truth of the proposition. This doesn't mean that the evidence needs to be sufficient to raise one's probability assignment all the way to 1. Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence providing me information of this truth. Yet I'm not 100% certain about the truth of this proposition (although I'm pretty close).

Comment author: hyporational 18 November 2013 07:28:46AM *  0 points [-]

Don't you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true?

I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are "actually true"?

Are you saying that truth is anything it's possible to believe with high probability given the evidence that can be acquired?

Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence providing me information of this truth. Yet I'm not 100% certain about the truth of this proposition (although I'm pretty close).

What would it mean to establish the knowledge that this proposition is actually true?

Comment author: pragmatist 18 November 2013 09:48:40AM *  0 points [-]

I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are "actually true"?

No, it doesn't. I mean, any proposition to which I assign a non-extremal probability could be assigned a higher probability if I look for more evidence. So that criterion doesn't pick out a useful class of propositions.

Are you saying that truth is anything it's possible to believe with high probability given the evidence that can be acquired?

No. There are propositions which one can (rationally) believe with high probability given the available evidence that are nonetheless false.

I think the problem with what you're doing is that you're trying to analyze truth in terms of probability assignment. That's backwards. The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false. When I say that I assign a probability of 0.6 to a particular proposition, I'm expressing my uncertainty about the truth of the proposition, or the odds at which I'd take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).

So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.

What would it mean to establish the knowledge that this proposition is actually true?

If you mean establish it with absolute certainty, then I don't think that's possible. If you mean establish it with a high degree of confidence, then it would just amount to gathering a large amount of evidence that confirms the proposition.

There's no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that "Barack Obama is President" is a true statement). If you know how to do the former, then you know how to do the latter. Adding "is actually true" at the end doesn't produce any new epistemic requirements.

Comment author: hyporational 18 November 2013 10:55:43AM *  0 points [-]

I think the problem with what you're doing is that you're trying to analyze truth in terms of probability assignment. That's backwards.

Not really. If you can't establish what truth is, then probability obviously can't be an expression of your beliefs in relation to truth.

The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false.

The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.

When I say that I assign a probability of 0.6 to a particular proposition, I'm expressing my uncertainty about the truth of the proposition, or the odds at which I'd take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).

I'd say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next. Why do induction and empiricism work? Because they have worked before, not because you're presupposing a true world out there.

So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.

That's why we need axioms. It seems to me axioms are not the kind of truth that JTB presupposes. I'm not saying we don't need mathematical truths or axioms that are agreed upon. I'm saying that presupposing the true territory out there doesn't add anything to the process of probabilistic reasoning.

If you mean establish it with absolute certainty, then I don't think that's possible.

That's what I mean, and that's what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.

There's no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that "Barack Obama is President" is a true statement). If you know how to do the former, then you know how to do the latter. Adding "is actually true" at the end doesn't produce any new epistemic requirements.

I agree.

Comment author: pragmatist 18 November 2013 01:08:00PM *  0 points [-]

The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.

I don't know what you mean by "platonic truth". I suspect you are thinking of something much more metaphysically freighted than necessary. The kind of truth I'm talking about (and I think most people are talking about when they say "truth") very much can be established. For instance, I can establish what the truth is about the capital of Latvia by looking up Latvia on Wikipedia. I just did, and established the truth of the proposition "The capital of Latvia is Riga." Sure this doesn't establish the truth with 100% certainty, but why should that be the standard for truth being a useful notion?

Truth is not something you need God-like noumenal superpowers to determine. It's something that can be determined with the very human superpowers of empirical investigation and theory-building.

I'd say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next.

I assign probabilities to past events, to empirically indistinguishable scientific hypotheses, to events that are in principle unobservable for me. Am I just doing it wrong, in your opinion?

That's what I mean, and that's what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.

What kind of a notion of truth? The kind that requires absolute certainty? But I'm not aware of anyone arguing that one needs that kind of truth for the JTB account, or to make sense of probabilistic reasoning. Why do you think that kind of notion of truth is needed?

Comment author: hyporational 18 November 2013 01:14:08PM *  0 points [-]

I'm not arguing for any kind of notion of truth. I thought the kind of notion of truth JTB seems to be assuming is confusing as hell, and I wanted clarification for what it was trying to say.

My objection started from here:

2) You're misunderstanding the purpose of "true" in the JTB definition. It's not a matter of assigning probability 1 to a proposition, it's a matter of the proposition actually being true.

Can you get back to that, because I don't understand you anymore?

Comment author: pragmatist 18 November 2013 01:19:44PM *  1 point [-]

OK, I guess we were talking past each other. What is it about that particular claim that you find objectionable? I thought what you were objecting to was the notion that a proposition being true is distinct from it being assigned probability 1, and I was responding to that. But are you objecting to something else?

Is your objection just that you don't understand what people mean by "true" in the JTB account? I don't think they're committed to any particular notion, except for the claim that justification and truth are distinct. A belief can be highly justified and yet false, or not at all justified and yet true. Pretty much any of the theories discussed here would work. My personal preference is deflationism.

Comment author: jockocampbell 16 November 2013 10:04:45PM *  -1 points [-]

Thanks for the link to Gettier's paper.

It seems he considers that the statement 'S knows that P' can have only two possible values, true or false. This may have been a historical tradition within philosophy since Plato, but it seems to rule out many ordinary usages of 'knowledge', such as 'I know a little about that'.

As noted by Edwin Jaynes, Bayesians usually consider knowledge in terms of probability:

In our terminology, a probability is something that we assign, in order to represent a state of knowledge.

In his great text on Bayesian inference, Probability Theory: The Logic of Science, he demonstrates that Aristotelian logic is a limiting case of probability theory: the results of logic are the results of probability theory where the values of probabilities are restricted to 0 and 1. I believe this probabilistic approach provides a richer context for knowledge in that there are degrees of certainty. My reworking of Plato's definition attempted to transition it to this context.
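Jaynes's limiting-case point can be illustrated with a toy calculation (a sketch of my own, not taken from his book): the sum and product rules of probability reproduce deductive inference when every probability is 0 or 1.

```python
def total_prob(p_a, p_b_given_a, p_b_given_not_a):
    """P(B) = P(B|A)P(A) + P(B|~A)P(~A) -- the sum and product rules."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Modus ponens as a limiting case: if P(A) = 1 and P(B|A) = 1,
# then P(B) = 1 no matter what P(B|~A) is.
assert total_prob(1.0, 1.0, 0.0) == 1.0
assert total_prob(1.0, 1.0, 1.0) == 1.0

# With intermediate probabilities the same rule gives graded belief.
print(total_prob(0.8, 0.9, 0.3))  # 0.9*0.8 + 0.3*0.2 = 0.78
```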

Pick your favorite case of a scientific theory that was once well supported by the evidence, but turned out to be false. Back when available evidence supported it, did scientists know it was true?

Perhaps those scientists from the past should have said it had a high probability of being true. I may be misunderstanding you, but I do not believe science can produce certainty, and this seems to be a common view. I quote Wikipedia:

A scientific theory is empirical, and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain as science accepts the concept of fallibilism.

Comment author: VAuroch 18 November 2013 09:38:03AM -1 points [-]

You tried to define knowledge as simply 'justified belief'. The example scientific theory was believed to be true, and that belief was justified by the evidence then available. But, as we now know, that belief was false. By your definition, however, they can still be said to have 'known' the theory was true.

That is the problem with the definition not including the 'true' caveat.

Comment author: jockocampbell 18 November 2013 08:24:49PM *  0 points [-]

You misunderstand me. I did not say it was

'known' the theory was true.

I reject the notion that any scientific theory can be known to be 100% true, I stated:

Perhaps those scientist from the past should have said it had a high probability of being true.

As we all know now, Newton's theory of gravitation is not 100% true, and therefore in a logical sense it is not true at all. We have counterexamples, such as the shift of Mercury's perihelion, which it does not predict. However, the theory is still a source of knowledge; it was used by NASA to get men to the moon.

Perhaps considering knowledge as an all or none characteristic is unhelpful.

If we accept that a theory must be true or certain in order to contain knowledge it seems to me that no scientific theory can contain knowledge. All scientific theories are falsifiable and therefore uncertain.

I also consider it hubris to think we might ever develop a 'true' scientific theory as I believe the complexities of reality are far beyond what we can now imagine. I expect however that we will continue to accumulate knowledge along the way.

Comment author: VAuroch 18 November 2013 08:50:34PM -2 points [-]

No, Newton's theory of gravitation does not provide knowledge. Belief in it is no longer justified; it contradicts the evidence now available.

However, prior to relativity, the existing evidence justified belief in Newton's theory. Whether or not it justified 100% confidence is irrelevant; if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.

So, using the definition you gave, physicists everywhere (except one patent office in Switzerland) knew Newton's theory to be true, because the belief "Newton's theory is accurate" was justified. However, we now know it to be false.

Currently, we have a different theory of gravity. Belief in it is justified by the evidence. By your standard, we know it to be true. That's patently ridiculous, however, since physicists still seek to expand or disprove it.

Comment author: jockocampbell 18 November 2013 09:21:20PM 1 point [-]

I agree with your statement:

if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.

However, I think you are misunderstanding me.

I don't think we require 100% justified confidence for there to be knowledge. I believe knowledge is always a probability, and that scientific knowledge is always something less than 100%.

I suggest that knowledge is justified belief, but it is always a probability less than 100%. As I wrote, I mean justified in the Bayesian sense, which assigns a probability to a state of knowledge. The correct probability to assign may be calculated with a Bayesian update.

This is a common Bayesian interpretation. As Jaynes wrote:

In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
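The Bayesian update mentioned above can be sketched in a few lines (illustrative numbers of my own, not anything from the discussion):

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H|E) from P(H), P(E|H), and P(E|~H) via Bayes' rule."""
    p_e = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / p_e

# Start 50/50 on a hypothesis; observe evidence four times
# more likely if the hypothesis is true than if it is false.
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```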

Comment author: VAuroch 18 November 2013 10:16:48PM 0 points [-]

I am fairly certain I understand your position better than you yourself do. You have eliminated the distinction between belief and knowledge entirely, thus rendering the word knowledge useless. Tabooing is not an argument; this conclusion is not valid.

You have repeatedly included in your argument false statements, even under your own interpretation. You have also misinterpreted quotes to back up your argument, such as misunderstanding the statement

In our terminology, a probability is something that we assign, in order to represent a state of knowledge.

To mean that knowledge is a probability, rather than the actual meaning of 'probability quantifies how much we know that we do not know'.

You are in a state of confusion, though you may not have realized it, and I have no interest in continuing to point out the flawed foundations if you will ignore the demonstration. I am done here.

Comment author: linkhyrule5 19 November 2013 05:08:19AM 0 points [-]

From what I can see, you're arguing entirely over the definition of 'knowledge' instead of just splitting up individual concepts and giving them different names.

Comment author: VAuroch 19 November 2013 06:09:17AM *  -1 points [-]

I basically agree, we are. What I'm trying to do is to maintain knowledge as a separate thing from belief. I don't have particular attachment to this definition of knowledge (as pointed out above, "justified true belief" is a little simplistic), but I can't find any way that jocko's version is different from straight-up belief.

Comment author: [deleted] 13 June 2014 09:39:31AM -1 points [-]

Things like "Philosophy seems to have made little progress defining knowledge since Plato's 'justified true belief'. I concur with this definition given three, hopefully minor caveats:" and "philosophy is a diseased discipline" get me every time. Do you actually know about the history of philosophy since Plato? It is absurd that statements like this can go unchallenged.

Comment author: somervta 15 November 2013 11:51:32AM 4 points [-]

I'm not sure if the next post should cover nonevidentialist epistemologies, or if I should just skip straight to skepticism.

Well, I'd love it if you covered nonevidentialist epistemologies, as that's the area with which I'm least familiar.

Comment author: hyporational 18 November 2013 05:51:31AM *  0 points [-]

I'd definitely want to see them as well. Know your enemy and all that :)

Comment author: Kaj_Sotala 17 November 2013 08:40:28AM 3 points [-]

I'm not sure if the next post should cover nonevidentialist epistemologies,

I would be interested in reading this, and liked the discussion about different epistemologies in this post.

Comment author: TheOtherDave 15 November 2013 04:43:18PM 6 points [-]

Note: I'm very much open to input on how to handle future installments in the series. In particular, I'm not sure how much information to try to cram into one post (this could've easily been two),

FWIW, this seemed like a good level of information density for a single post, though that impression might depend on the fact that most of the material isn't new to me.

and I'm not sure if the next post should cover nonevidentialist epistemologies, or if I should just skip straight to skepticism.

I'm not sure what I could tell you that would legitimately contribute to your confidence one way or the other.

That said, I will admit that I've never entirely understood how reliabilism/causalism is supposed to be anti-evidential in the first place (doesn't the existence of a reliable/causal process for generating true beliefs count as evidence for the truths of the beliefs generated by that process?), so I may be something of a lost cause in that area.

Comment author: AspiringRationalist 16 November 2013 09:17:06PM 4 points [-]

Given the difficulty philosophers have had defining what is knowledge, perhaps it's better to dissolve the question.

Comment author: TheAncientGeek 29 November 2013 09:36:33AM *  0 points [-]

Actually, the JTB definition stood up for 2,500 years (the first serious challenge was Gettier, in 1963). That looks extremely robust to me, as definitions go.

Of course this is LW, so you could be using "define" to mean "explain"...

Comment author: Leonhart 16 November 2013 06:14:08PM *  2 points [-]

Chris, to assist me in joining this to LW-ideas: when Feldman speaks of "spontaneously formed beliefs" and "proper responses to experience", is he pointing to the same notion EY was pointing to with "dynamics" in Created Already in Motion? A transformation of experience into belief that happens automatically and outside awareness?

Comment author: ChrisHallquist 16 November 2013 06:30:25PM *  1 point [-]

Maybe? They're not exactly the same thing, but maybe you could see them "pointing at" the same basic idea. Or that the kind of spontaneous belief formation/responses to experience Feldman talks about are examples of what Eliezer calls "dynamics."

EDIT: Another way to put it is that the problems being addressed are connected, but the terminology doesn't map well.

Comment author: Vladimir_Nesov 15 November 2013 06:12:25PM 5 points [-]

The mode of discussion present in this book (based on your summary) seems like sloppy reasoning, full of leaky abstractions and attempts to build something out of them. Perhaps the book is mostly about merely describing common forms of sloppy reasoning, and I suppose a sufficiently careful reader could avoid confusion, but that would be despite the text, not because of it. I think it's a bad idea to endorse discussion in this mode.

(Confused discussion is useful for representing a particular problem in the process of figuring out how to think about it, if it's the only form in which motivation for the problem is available, because confused discussion is more informative than refusing to admit that you have any form of data to work with. But it seems to be usually counterproductive as a mode of reasoning, including reasoning about a confusing problem. Confusion only works as a representation of the problem, not as a tool for resolving it. So we could consider any given confused position about epistemology as an object of study, but it seems to be a bad idea to use the other confused positions to reason about it.)

Comment author: Kaj_Sotala 17 November 2013 08:47:32AM 3 points [-]

The mode of discussion present in this book (based on your summary) seems like sloppy reasoning

I didn't get this impression. Could you name some examples? I wonder whether we're interpreting the text differently.

Comment author: ChrisHallquist 15 November 2013 06:40:49PM 6 points [-]

I've skated over a lot of the detail in many of the arguments, which may make them look sloppier than they really are. I'm trying to summarize 70 pages of material in one blog post, remember. In fact Feldman tries very hard to present the arguments carefully, at least by the standards of careful reasoning that currently prevail in philosophy.

On the other hand, from a LessWrong point of view, the entire thing could very easily look like it's based on a failure to realize that certain generalizations are always going to be leaky no matter what you do.

Comment author: TheAncientGeek 15 November 2013 06:56:15PM *  -2 points [-]

You have something better in mind?

Comment author: torekp 16 November 2013 09:11:31PM 2 points [-]

How much epistemological "debate" vanishes if we just insist on distinguishing the questions, when is a belief rational, and when is it reliable? It seems that coherentists are simply more interested in the former question and foundationalists in the latter.

Comment author: ChrisHallquist 17 November 2013 04:58:58AM 0 points [-]

If that's the big difference, they're completely unaware of it.

Comment author: CronoDAS 20 November 2013 04:44:30AM *  1 point [-]

No person could have an infinite series of basic beliefs, so no justified belief could have an evidential chain that is an infinite regress of beliefs (that is, not (b)).

Why not? It's not actually hard to specify an infinite number of axioms to use in a first-order logic system. For example, "For any well-formed formula {x}, {x} -> {x} is an axiom" represents an infinite number of axioms by using a single sentence. So I don't see why a person can't have an infinite series of beliefs.
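The schema-to-instances idea can be sketched like this (a toy formula language of my own, not real first-order machinery):

```python
from itertools import count

def formulas():
    """Enumerate an unbounded supply of well-formed formulas P0, P1, ..."""
    for i in count():
        yield f"P{i}"

def axiom_instances():
    """The single schema 'x -> x' yields one axiom per formula."""
    for x in formulas():
        yield f"({x} -> {x})"

gen = axiom_instances()
print([next(gen) for _ in range(3)])  # ['(P0 -> P0)', '(P1 -> P1)', '(P2 -> P2)']
```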

Comment author: ChrisHallquist 20 November 2013 06:05:47AM 1 point [-]

I don't have a dog in this fight, but you may be happy to know that there actually is a name for rejecting the assumption you quoted, "infinitism." In undergrad, my epistemology professor told us he knew of "one guy" who held this position. Make of that what you will.

Comment author: somervta 20 November 2013 05:25:27AM 0 points [-]

infinite in the "the number of beliefs I hold is growing without bound" sense, but that's not the relevant sense here. Growth over time doesn't cut it for this - in order to have an infinite regress of beliefs, you'd have to have all infinity of them at the same time. The set of all your beliefs would have to have infinite cardinality, AFAICT (oddly enough, I've never seen a mathematical analysis of this, but I can't see why it wouldn't be like this.)

Comment author: yli 19 November 2013 06:02:41AM *  1 point [-]

Thanks, this looks to be a good summary of what I'm not missing :)

Comment author: aquaticko 29 November 2013 04:57:10AM 0 points [-]

It seems as though just about all knowledge is inherently coherentist. There is no position outside of myself and my associated framework of knowledge-claiming processes from which I can claim knowledge. How I claim to know what I claim to know will always depend on the way in which I came to claim to know what I know (sorry, wordy). That certainly needn't be circular, but it does seem to make a lie out of point (2) of Feldman's modest foundationalism.

Going back to Descartes, any non-solipsistic knowledge claims are only considered simple JTB assuming standard conditions for observation, i.e. even assuming a realist position, all knowledge is fundamentally inferential and inductive, even formal logic and mathematics. The odds that formal logic and mathematics are true and not some trick played on the mechanics of the universe seem good, but of course, even that is an inductive inference. All systems which are deserving of the name are coherent and self-consistent, and some are even consistent with other systems, but I am at least unaware of a system which is consistent and coherent with all other systems, i.e., contains all other systems... Oh boy, getting into the universal set problem. Sorry 'bout that.

Comment author: TheAncientGeek 05 December 2013 01:27:22PM 0 points [-]

It seems as though just about all knowledge is inherently coherentist. There is no position outside of myself and my associated framework of knowledge-claiming processes from which I can claim knowledge.

I don't see how the second sentence supports coherentism. For that matter, I also don't see how it goes against foundationalism. Foundationalists typically believe that sensory evidence, not an "external view", is their foundation.

How I claim to know what I claim to know will always depend on the way in which I came to claim to know what I know (sorry, wordy). That certainly needn't be circular, but it does seem to make a lie out of point (2) of Feldman's modest foundationalism.

Again, I don't see the connection. The foundationalist claim is that propositions that need justification can get it from propositions that don't need justification. It isn't a claim that there are ways of knowing things that are fundamentally unconditioned by being known in particular ways. "Snow is white" has to be expressed in some language or other, but that doesn't render it doubtful.

Going back to Descartes, any non-solipsistic knowledge claims are only considered simple JTB assuming standard conditions for observation, i.e. even assuming a realist position, all knowledge is fundamentally inferential and inductive, even formal logic and mathematic.

How do you know that "I am currently under standard observation conditions" isn't foundationally justified?
