endoself comments on Bayesian Epistemology vs Popper - Less Wrong Discussion

-1 Post author: curi 06 April 2011 11:50PM

You are viewing a single comment's thread.

Comment author: endoself 07 April 2011 04:15:22AM *  0 points

I use the word "prove" because I'm doing it deductively, in math. I already linked you to the 2+2=3 thing, I believe. Also, the question of how I would, for example, change AI design if a well-known theorem turned out to be wrong (pretend it is the future, the best theorems proving Bayesianism are better known, and I am working on AI design) is both extremely hard to answer and unlikely to be necessary. Well, "unlikely" is the wrong word; what is P(X | "there are no probabilities")? :)

Comment author: calef 07 April 2011 05:10:07AM *  1 point

Probably the most damning criticism you'll find, curi, is that fallibilism isn't useful to the Bayesian.

The fundamental disagreement here is somewhere in the following statement:

"There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true."

I suspect your disagreement is in one of several places.

1) You disagree that there even exist epistemically "true" facts.

2) You disagree that we can determine how likely something is to be true.

3) You disagree that likelihood of being true (as defined by us) is reason to believe the truth of something.

I can actually flesh out your objections to all of these things.

For 1, you could probably successfully argue that we aren't capable of determining whether we've ever actually arrived at a true epistemic statement, because real certainty doesn't exist; thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God--i.e. shaky to the point of not concerning oneself with them altogether.

2 basically ties in with the above directly.

3 is a whole 'nother ball game, and I don't think it's really been broached yet by anyone, but it's certainly a valid point of contention. I'll leave it out unless you'd like to pursue it.

The Bayesian counter to all of these is simply, "That doesn't really do anything for me."

Declaring that we have certainty, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go; it will fall to the ground. I have an incredibly huge amount of certainty in my ability to repeat that experiment.
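That "incredibly huge amount of certainty" can even be put into numbers. One standard sketch (my illustration, not something from the thread) is Laplace's rule of succession: starting from a uniform prior over the unknown success rate, after s successes in n trials the probability of success on the next trial is (s + 1) / (n + 2).

```python
def rule_of_succession(successes, trials):
    """Posterior predictive probability of success on the next trial,
    assuming a uniform prior over the unknown success rate (Laplace)."""
    return (successes + 1) / (trials + 2)

# Before any evidence at all, the best guess is even odds:
print(rule_of_succession(0, 0))        # 0.5

# After dropping 1,000 apples and watching every one fall:
print(rule_of_succession(1000, 1000))  # ~0.999
```

Note that the probability approaches but never reaches 1.0, no matter how many apples fall, which is exactly the "relative certainty rather than absolute certainty" stance described here.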

That I cannot foresee the philosophical paradigm that would uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of that hypothesis. Such an apples-don't-fall-when-dropped paradigm would literally (and necessarily) uproot everything else we know about the world.

Basically, what I'm trying to say is that all you're ever going to get out of a Bayesian is, "No, I disagree. I think we can have certainty." And the only way you could disprove a conclusion made by a Bayesian is through means the Bayesian would have already seen, in which case the Bayesian would have already rejected said conclusion.

You've already outlined that the fallibilist will just keep tweaking explanations until an explanation with no criticism is reached. I think you might find Bayesianism more palatable if you pretend that we aren't trying to find certainty, and instead say we're trying to minimize criticism.

This probably hasn't been a very satisfying answer. I certainly agree it's useful to have an understanding of the biases to our certainties. I also think Bayesianism happens to build that into itself quite well. Personally, I don't think there's anything I'm absolutely certain about, because to claim so would be silly.

Comment author: endoself 07 April 2011 05:32:57AM 1 point

Small nitpick: I don't like your use of the word 'certainty' here. Especially in philosophy, it has too much of a connotation of "literally impossible for me to be wrong" rather than "so ridiculously unlikely that I'm wrong that we can just ignore it", which may cause confusion.

Comment author: calef 07 April 2011 05:40:16AM 0 points

Where don't you like it? I don't think anyone actually argues for your first definition, because, like I said, it's silly. I think curi's point is that fallibilism is predicated on your second definition not (ever?) being a valid claim.

My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).

Comment author: endoself 07 April 2011 06:01:44AM 2 points

It is a silly definition, but people are silly enough that I hear it often enough to be wary of it.

> My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).

I interpreted this as the first definition. I guess we should see what curi says.

Comment author: Peterdjones 12 April 2011 08:34:02PM 1 point

People generally try to have their cake and eat it: they want "certainty" to mean "cannot be wrong", but only on the basis that they feel sure.