Bayesian Epistemology vs Popper

Post author: curi 06 April 2011 11:50PM

I was directed to this book (http://www-biba.inrialpes.fr/Jaynes/prob.html) in conversation here:

http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3ug7?context=1#3ug7

I was told it had a proof of Bayesian epistemology in the first two chapters. One of the things we were discussing is Popper's epistemology.

Here are those chapters:

http://www-biba.inrialpes.fr/Jaynes/cc01p.pdf

http://www-biba.inrialpes.fr/Jaynes/cc02m.pdf

I have not found any proof here that Bayesian epistemology is correct. There is not even an attempt to prove it. Various things are assumed in the first chapter. In the second chapter, some things are proven given those assumptions.

Some first chapter assumptions are incorrect or unargued. It begins with an example with a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false. I agree so far. Next it says "we will grant that it had a certain degree of validity". But I will not grant that. Popper's epistemology explains that *this is a mistake* (and Jaynes makes no attempt at all to address Popper's arguments). In any case, simply assuming his readers will grant his substantive claims is no way to argue.

The next sentences blithely assert that we all reason in this way. Jaynes basically presents the issues raised by this kind of reasoning as his topic. This simply ignores Popper and makes no attempt to prove that Jaynes' approach is correct.

Jaynes goes on to give syllogisms, which he calls "weaker" than deduction, and which he acknowledges are not deductively correct. And then he just says we use that kind of reasoning all the time. That sort of assertion only appeals to the already converted. Jaynes starts with arguments which appeal to the *intuition* of his readers, not with arguments which could persuade someone who disagreed with him (that is, good rational arguments). Later, when he gets into more mathematical material which doesn't (directly) rest on appeals to intuition, it does rest on the ideas he (supposedly) established early on with his appeals to intuition.

The outline of the approach here is to gloss quickly over substantive philosophical assumptions, never provide serious arguments for them, take them as common sense, leave them undetailed, and then later provide arguments which are rigorous *given the assumptions glossed over earlier*. This is a mistake.

So we get, e.g., a section on Boolean Algebra which says it will state previous ideas more formally. This briefly acknowledges that the rigorous parts depend on the non-rigorous parts. Also, the very important problem of carefully detailing how the mathematical objects discussed correspond to the real-world things they are supposed to help us understand does not receive adequate attention.

Chapter 2 begins by saying we've now formulated our problem and the rest is just math. What I take from that is that the early assumptions won't be revisited but simply used as premises. So the rest is pointless if those early assumptions are mistaken, and Bayesian epistemology cannot be proven in this way to anyone who doesn't grant the assumptions (such as a Popperian).

Moving on to Popper, Jaynes is ignorant of the topic and unscholarly. He writes:

http://www-biba.inrialpes.fr/Jaynes/crefsv.pdf

> Karl Popper is famous mostly through making a career out of the doctrine that theories may not be proved true, only false

This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else).

It's important to criticize unscholarly books that promote myths about rival philosophers rather than addressing their actual arguments. That's a major flaw not just in a particular paragraph but in the author's way of thinking. It's especially relevant in this case since the author of the book tries to tell us how to think.

Note that Yudkowsky made a similar unscholarly mistake, about the same rival philosopher, here:

http://yudkowsky.net/rational/bayes

> Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning.  Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed

Popper's philosophy is not falsificationism, it was never the most popular, and it is fallibilist: it says ideas cannot be definitely falsified. It's bad to make this kind of mistake about what a rival's basic claims are when claiming to be dethroning him. The correct method of dethroning a rival philosophy involves understanding what it does say and criticizing that.

If Bayesians wish to challenge Popper they should learn his ideas and address his arguments. For example he questioned the concept of positive support for ideas. Part of this argument involves asking the questions: 'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me.

Comments (226)

Comment author: prase 07 April 2011 01:30:14PM 19 points [-]

I have skimmed through the comments here and smelled a weak odour of a flame war. Well, the discussion is still rather civil and far from a flame war as understood on most internet forums, but it somehow doesn't fit well within what I am used to seeing here on LW.

The main problem I have is that you (i.e. curi) have repeatedly asserted that the Bayesians, including most of LW users, don't understand Popperianism and that Bayesianism is in fact worse, without properly explaining your position. It is entirely possible, even probable, that most people here don't actually get all subtleties of Popper's worldview. But then, a better strategy may be to first write a post which explains these subtleties and tells why they are important. On the other hand, you don't need to tell us explicitly "you are unscholarly and misinterpret Popper". If you actually explain what you ought to (and if you are right about the issue), people here will likely understand that they were previously wrong, and they will do it without feeling that you seek confrontation rather than truth - which I mildly have.

Comment author: Desrtopa 07 April 2011 02:49:52PM *  3 points [-]

Upvoted and agreed. I feel at this point like further addressing the discussion on present terms would be simply irresponsible, more likely to become adversarial than productive. If curi wrote up such a post, it would hopefully give a meaningful place to continue from.

Edit: It seems that curi has created such a post. I'm not entirely convinced that continuing the discussion is a good idea, but perhaps it's worth humoring the effort.

Comment author: TheOtherDave 07 April 2011 02:24:22PM 2 points [-]

For what it's worth, I have that feeling more than mildly and consequently stopped paying attention to the curi-exchange a while ago. Too much heat, not enough light.

I've been considering downvoting the whole thread on the grounds that I want less of it, but haven't yet, roughly on the grounds that I consider it irresponsible to do so without paying more careful attention to it and don't currently consider it worth paying more attention to.

Comment author: curi 07 April 2011 07:46:41PM 0 points [-]

By "properly explaining my position" I'm not sure what you want. Properly understanding it takes reading, say, 20 books (plus asking questions about them as you go, and having critical discussions about them, and so on). If I summarize, lots of precision is lost. I have tried to summarize.

I can't write "a (one) post" that explains the subtleties of Popper. It took Popper a career and many books.

Bayesianism has a regress/foundations problem. Yudkowsky acknowledges that. Popperism doesn't. So Popperism is better in a pretty straightforward way.

On the other hand, you don't need to tell us explicitly "you are unscholarly and misinterpret Popper".

But they were propagating myths about Popper. They were unscholarly. They didn't know wtf they were talking about, not even the basics. Basically all of Popper's books contradict those myths. It's really not cool to attribute positions to someone he never advocated. This mistake is easy to avoid by the method: don't publish about people you haven't read. Bad scholarship is a big deal, IMO.

Comment author: Desrtopa 08 April 2011 09:34:47PM *  4 points [-]

Bayesianism has a regress/foundations problem. Yudkowsky acknowledges that. Popperism doesn't. So Popperism is better in a pretty straightforward way.

Any system with axioms can be infinitely regressed or rendered circular if you demand that it justify the axioms. Critical Rationalism has axioms, and can be infinitely regressed.

You were upvoted in the beginning for pointing out gaps in scholarship and raising ideas not in common circulation here. You yourself, however, have demonstrated a clear lack of understanding of Bayesianism, and have attracted frustration with your own lack of scholarship and confused arguments, along with failure to provide good reasons for us to be interested in the prospect of doing this large amount of reading you insist is necessary to properly understand Popper. If doing this reading were worthwhile, we would expect you to be able to give a better demonstration of why.

Comment author: prase 07 April 2011 08:51:35PM 2 points [-]

I have tried to summarize.

I acknowledge that, although I would have preferred if you had done that before you wrote this post.

I can't write "a (one) post" that explains the subtleties of Popper. It took Popper a career and many books.

Could be five posts.

Even if such a defense can sometimes be valid, it is too often used to defend confused positions (think about theology) to be very credible.

Comment author: curi 07 April 2011 08:52:52PM -1 points [-]

It would need to be 500 posts.

But anyway, they are written and published. By Popper not me. They already exist and they don't need to be published on this particular website.

Comment author: [deleted] 07 April 2011 08:55:41PM 4 points [-]

One thing you could do is write a post highlighting a specific example where Bayes is wrong and Popper is right. A lot of people have asked for specific examples in this thread; if you could give a detailed discussion of one, that would move the discussion to more fertile ground.

Comment author: curi 07 April 2011 08:57:01PM *  1 point [-]

Can you give me a link to a canonical essay on Bayesian epistemology/philosophy, and I'll pick from there?

Induction and justificationism are examples but I've been talking about them. I think you want something else. Not entirely sure what.

Comment author: [deleted] 07 April 2011 09:04:46PM 1 point [-]

It's not at all canonical, but a paper that neatly summarizes Bayesian epistemology is "Bayesian Epistemology" by Stephan Hartmann and Jan Sprenger.

Comment author: curi 07 April 2011 09:09:44PM 1 point [-]

Comment author: [deleted] 07 April 2011 09:14:16PM 1 point [-]

Excellent, thanks.

Comment author: prase 07 April 2011 08:57:01PM 3 points [-]

Following your advice expressed elsewhere, isn't the fact that the basics of Popperianism cannot be explained in five posts a valid criticism of Popperianism, which should be therefore rejected?

Comment author: curi 07 April 2011 08:59:26PM 1 point [-]

Why is that a criticism? What's wrong with that?

Also maybe it could be. But I don't know how.

And the basics could be explained quickly, to someone who didn't have a bunch of anti-Popperian biases, but people do have those b/c they are built into our culture. And without the details and precision, people complain about 1) not understanding how to do it, what it says 2) it not having enough precision and rigor

Comment author: prase 07 April 2011 09:06:08PM *  2 points [-]

Why is that a criticism?

Actually I don't know what constitutes a criticism in your book (since you never specified), but you have also said that there are no rules for criticism, so I suppose that it is a criticism. If not, then please say why it is not a criticism.

I am not going to engage in a discussion about my and your biases, since such debates rarely lead to an agreement.

Comment author: curi 07 April 2011 09:11:10PM *  0 points [-]

You can conjecture standards of criticism, or use the ones from your culture. If you find a problem with them, you can change them or conjecture different ones.

For many purposes I'm pretty happy with common sense notions of standards of criticism, which I think you understand, but which are hard to explain in words. If you have a relevant problem with them, you can say it.

Comment author: benelliott 07 April 2011 06:29:42AM *  5 points [-]

I gave a description of how a Bayesian sees the difference between "X supports Y" and "X is consistent with Y" in our previous discussion. I don't know if you saw it, you haven't responded to it and you aren't acting like you accepted it, so I'll give it again here:

"X is consistent with Y" is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e. that X is plausible if we assume Y.

"X supports Y" means P(Y|X) > P(Y), X supports Y if and only if Y becomes more plausible when we learn of X. Bayes tells us that this is equivalent to P(X|Y) > P(X), i.e. if Y would suggest that X is more likely that we might think otherwise then X is support of Y.

Suppose we make X the statement "the first swan I see today is white" and Y the statement "all swans are white". P(X|Y) is very close to 1, P(X|~Y) is less than 1 so P(X|Y) > P(X), so seeing a white swan offers support for the view that all swans are white. Very, very weak support, but support nonetheless.
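The swan example above can be checked numerically. A minimal Python sketch, where the probability values (prior on Y, and the chance of seeing a white swan first even if not all swans are white) are illustrative assumptions, not numbers from the comment:

```python
# "X supports Y" as P(Y|X) > P(Y), with the swan example.
# X = "the first swan I see today is white", Y = "all swans are white".
p_y = 0.5              # assumed prior for Y
p_x_given_y = 1.0      # if all swans are white, the first swan seen is white
p_x_given_not_y = 0.9  # assumed: most swans seen are white even if not all are

# Total probability and Bayes' rule
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
p_y_given_x = p_x_given_y * p_y / p_x

print(p_y_given_x > p_y)  # seeing a white swan supports Y, if only weakly
```

With these numbers the posterior rises from 0.5 to roughly 0.526: very weak support, matching the comment's point.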

For a Popperian definition, you guys are allowed to criticise something right? In that case could we say that support for a proposition is logically equivalent to a criticism of its negation?

The whole 'there is no positive support' thing seems like an overreaction to the Cartesian 'I can prove ideas with certainty' thing. I agree that certain support is a flawed concept, but you seem to be throwing the baby out with the bathwater by saying uncertain support is guilty by association and should be rejected as well.

Also, I'm a little incredulous here, do you really reject the policeman's syllogism? Would you say he is wrong to chase the man down the road? If you encountered such a person, would you genuinely treat them as you would treat anyone else?

Comment author: curi 07 April 2011 07:07:21AM *  0 points [-]

I missed your comment. I found it now. I will reply there.

http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3uld?context=1#3uld

could we say that support for a proposition is logically equivalent to a criticism of its negation?

No. The negation of a universal theory is not universal, and the negation of an explanatory theory is not explanatory. So, the interesting theories would still be criticism only, and the uninteresting ones (e.g. "there is a cat") support only. And the meaning of "support" is rather circumscribed there.

If you want to say theories of the type "the following explanation isn't true: ...." get "supported", it doesn't contribute anything useful to epistemology. The support idea, as it is normally conceived, is still wrong, and this rescues none of the substance.

The other issue is that criticism isn't the same kind of thing as support. It's not in the same category of concept.

Yes I really reject the policeman's syllogism. In the sense of: I don't think the argument in the book is any good. There are other arguments which are OK for reaching the conclusion (but which rely on things the book left unstated, e.g. background knowledge and context. Without adding anything at all, no cultural biases or assumptions or hidden claims, and even doing our best to not use the biases and assumptions built into the English language, then no there isn't any way to guess what's more likely).

Comment author: Peterdjones 15 April 2011 03:08:22PM 1 point [-]

If the Policeman's argument is only valid in the light of background assumptions, why would they need to be stated? Surely we would only need to make the same tacit assumptions to agree with the conclusions. Everyday reasoning differs from formal logic in various ways, and mainly because it takes short cuts. I don't think that invalidates it.

Comment author: jimrandomh 07 April 2011 01:02:59AM 12 points [-]

The assumptions behind Cox's theorem are:

  1. Representation of degrees of plausibility by real numbers
  2. Qualitative correspondence with common sense
  3. Consistency

Would you please clearly state which of these you disagree with, and why? And if you disagree with (1), is it because you don't think degrees of plausibility should be represented, or because you think they should be represented by something other than real numbers, and if so, then what? (Please do not give an answer that cannot be defined precisely by mapping it to a mathematical set. And please do not suggest a representation that is obviously inadequate, such as booleans.)

Comment author: curi 07 April 2011 03:00:06AM 1 point [-]

Could you explain what you're talking about a bit more? For example you state "consistency" as an assumption. What are you assuming is (should be?) consistent with what?

Comment author: JoshuaZ 07 April 2011 03:25:19AM 11 points [-]

You may have valid points to make but it might help in getting people to listen to you if you don't exhibit apparent double standards. In particular, your main criticism seems to be that people aren't reading Popper's texts and related texts enough. Yet, at the same time, you are apparently unaware of the basic philosophical arguments for Bayesianism. This doesn't reduce the validity of anything you have to say but as an issue of trying to get people to listen, it isn't going to work well with fallible humans.

Comment author: jimrandomh 07 April 2011 03:18:22AM *  4 points [-]

Cox's theorem is a proof of Bayes' rule, from the conditions above. "Consistency" in this context means (Jaynes 19): If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result; we always take into account all of the evidence we have relevant to a question; and we always represent equivalent states of knowledge by equivalent plausibility assignments. By "reason in more than one way", we specifically mean adding the same pieces of evidence in different orders.

(Edit: It's page 114 in the PDF you linked. That seems to be the same text as my printed copy, but with the numbering starting in a different place for some reason.)
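The "same pieces of evidence in different orders" requirement can be illustrated with ordinary Bayesian updating. A small Python sketch (the priors and likelihoods are made-up numbers, and the two pieces of evidence are assumed conditionally independent given the hypothesis):

```python
# Order-independence: updating on E1 then E2 gives the same posterior
# as updating on E2 then E1, when the likelihoods are fixed.
def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayes update: returns P(H|E) from P(H), P(E|H), P(E|~H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.3
e1 = (0.8, 0.4)  # (P(E1|H), P(E1|~H)), illustrative
e2 = (0.9, 0.2)  # (P(E2|H), P(E2|~H)), illustrative

posterior_a = update(update(prior, *e1), *e2)  # E1 first, then E2
posterior_b = update(update(prior, *e2), *e1)  # E2 first, then E1
print(abs(posterior_a - posterior_b) < 1e-12)  # True: order doesn't matter
```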

Comment author: Larks 08 April 2011 01:10:11AM 3 points [-]

If only Jaynes had clearly listed them on page 114!

Comment author: JoshuaZ 07 April 2011 01:00:33AM 5 points [-]

There's an associated problem here that may be getting ignored: Popper isn't a terribly good writer. "The Logic of Scientific Discovery" was one of the first phil-sci books I ever read and it almost turned me off of phil-sci. This is in contrast, for example, with Lakatos or Kuhn, who are very readable. Some of the difficulty with reading Popper and understanding his viewpoints is that he's just tough to read.

That said, I think that chapter 3 of that book makes clear that Popper's notion of falsification is more subtle than what I would call "naive Popperism". But Popper never fully gave an explanation of how to distinguish between strict falsification theory and his notions.

There's an associated important issue: many people claim to support naive Popperism as an epistemological position, either as a demarcation between science and non-science or as a general epistemological approach. In so far as both are somewhat popular viewpoints (especially among scientists) responding to and explaining what is wrong with that approach is important even as one should acknowledge that Popper's own views were arguably more nuanced.

Comment author: curi 07 April 2011 03:03:25AM 0 points [-]

I do not find Popper hard to read.

Popper never fully gave an explanation of how to distinguish between strict falsification theory and his notions.

Did you read his later books? He does explain his position. One distinguishing difference is that Popper is not a justificationist and they are. Tell me if you don't know what that means.

Comment author: [deleted] 07 April 2011 12:25:14AM *  4 points [-]

The naturalist philosopher Peter Godfrey-Smith said this of Popper's position:

[F]or Popper, it is never possible to confirm or establish a theory by showing its agreement with observations. Confirmation is a myth. The only thing an observational test can do is to show that a theory is false...Popper, like Hume, was an inductive skeptic, and Popper was skeptical about all forms of confirmation and support other than deductive logic itself...This position, that we can never be completely certain about factual issues, is often known as fallibilism...According to Popper, we should always retain a tentative attitude towards our theories, no matter how successful they have been in the past...[a]ll we can do is try out one theory after another. A theory that we have failed to falsify up till now might, in fact, be true. But if so, we will never know this or even have reason to increase our confidence.

(From Theory and Reality, p. 59-61.) Is this not an accurate description? You seem to think Popper didn't believe in definitive falsification, but this doesn't seem to be a universally accepted interpretation. Note also that Godfrey-Smith does refer to Popper's position as fallibilism, so he is not being "unscholarly." Though Popper may have held the position that falsification can't be perfectly certain, he definitely didn't take this idea too seriously because his description of science as a process (step one: come up with conjectures; step two: falsify them) makes use of falsification by experiment.

I think the answer to your overarching question can be found here. If we know that certain events are more probable given that certain other events happened, i.e. conditional probability, we can make inferences about the future.

Comment author: curi 07 April 2011 12:44:42AM *  3 points [-]

Is this not an accurate description?

No. To start with, it's extremely incomplete. It doesn't really discuss what Popper's position is. It just makes a few scattered statements which do not explain what Popper is about.

The word "show" is ambiguous in the phrase "show that a theory is false". To a Popperian, equivocation over the issue of what is meant there is an important issue. It's ambiguous between "show definitively" and "show fallibly".

The idea that we can show a theory is false by an experimental test (even fallibly) is also, strictly, false, as Popper explained in LScD. When you reach a contradiction, something in the whole system is false. It could be an idea you had about how to measure what you wanted to measure. There's many possibilities.

You seem to think Popper didn't believe in definitive falsification, but this doesn't seem to be a universally accepted interpretation.

It's right there in LScD on page 56. I think it's in most of his other books too. I am familiar with the field and know of no competent Popper scholars who say otherwise.

Anyone publishing to the contrary is simply incompetent, or believed low quality secondary sources without fact checking them.

Though Popper may have held the position that falsification can't be perfectly certain, he definitely didn't take this idea too seriously because his description of science as a process (step one: come up with conjectures; step two: falsify them) makes use of falsification by experiment.

You have misinterpreted when you took "falsify them" to mean "falsify them with certainty". Popper is a fallibilist.

If we know that certain events are more probable given that certain other events happened

This does not even attempt to address important problems in epistemology such as how explanatory or philosophical knowledge is created.

Comment author: [deleted] 07 April 2011 01:07:57AM *  3 points [-]

I'll agree that Godfrey-Smith's definition is incomplete, but I don't think it really matters for the purpose of this discussion: I've already said I agree that Popper did not believe in certain confirmation, and this seems to be your main problem with this quote and with the ones other people gave. You wrote:

You have misinterpreted when you took "falsify them" to mean "falsify them with certainty". Popper is a fallibilist.

No, that is not what I meant at all. What I meant was, Popper was content with the idea that experimental evidence can say that something is probably false. If he wasn't, he wouldn't have included this in his view of science as a process. So even though Popper was a fallibilist, he thought that when an experimental result argued against a hypothesis, it was good enough for science.

Next:

The idea that we can show a theory is false by an experimental test (even fallibly) is also, strictly, false, as Popper explained in LScD. When you reach a contradiction, something in the whole system is false. It could be an idea you had about how to measure what you wanted to measure. There's many possibilities.

Yes, this is the old "underdetermination of theory by data" problem, which Solomonoff Induction solves--see the coinflipping example here.

Moving on, you wrote:

This does not even attempt to address important problems in epistemology such as how explanatory or philosophical knowledge is created.

Would you mind elaborating on this? What specific problems are you referring to here?

Comment author: curi 07 April 2011 01:37:39AM 3 points [-]

Popper was content with the fact that experimental evidence can say that something is probably false

That is not Popper's position. That is not even close. In various passages he explicitly denies it like "not certain or probable". To Popper, the claims that the evidence tells us something is certainly true, or probably true, are cousins which share an underlying mistake. You're assuming Popper would agree with you about probability without reading any of his passages on probability in which he, well, doesn't.

Arguing what books say with people who haven't read them gets old fast. So how about you just imagine a hypothetical person who had the views I attribute to Popper and discuss that?

Would you mind elaborating on this? What specific problems are you referring to here?

For example, the answers to all questions that have a "why" in them. E.g. why is the Earth roughly spherical? Statements with "because" (sometimes implied) are a pretty accurate way to find explanations, e.g. "because gravity is a symmetrical force in all directions". Another example is all of moral philosophy. Another example is epistemology itself, which is a philosophy, not an empirical field.

Yes, this is the old "underdetermination of theory by data" problem

Yes

Which Solomonoff Induction solves--see the coinflipping example here.

This does not solve the problem to my satisfaction. It orders theories which make identical predictions (about all our data, but not about the unknown) and then lets you differentiate by that order. But isn't that ordering arbitrary? It's just not true that short and simple theories are always best; sometimes the truth is complicated.

Comment author: jimrandomh 07 April 2011 01:48:41AM 3 points [-]

For example, the answers to all questions that have a "why" in them. E.g. why is the Earth roughly spherical? Statements with "because" (sometimes implied) is a pretty accurate way to find explanations, e.g. "because gravity is a symmetrical force in all directions". Another example is all of moral philosophy. Another example is epistemology itself, which is a philosophy not an empirical field.

For a formal mathematical discussion of these sorts of problems, read Causality by Judea Pearl. He reduces cause to a combination of conditional independence and ordering, and from this he defines algorithms for discovering causal models from data, predicting the effect of interventions and computing counterfactuals.

Comment author: curi 07 April 2011 01:51:03AM *  1 point [-]

Could you give a short statement of the main ideas? How can morality be reduced to math? Or could you say something to persuade me that that book will address the issues in a way I won't think misses the point? (e.g. by showing you understand what I think the point is; otherwise I won't expect you to be able to judge if it misses the point in the way I would).

Comment author: jimrandomh 07 April 2011 02:01:00AM 2 points [-]

Sorry, I over-quoted there; Pearl only discusses causality, and a little bit of epistemology, but he doesn't talk about moral philosophy at all.

His book is all about causal models, which are directed graphs in which each vertex represents a variable and each edge represents a conditional dependence between variables. He shows that the properties of these graphs reproduce what we intuitively think of as "cause and effect", defines algorithms for building them from data and operating on them, and analyzes the circumstances under which causality can and can't be inferred from the data.
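The directed-graph representation described above can be sketched in a few lines. This is an assumed toy encoding (an adjacency dict with a made-up rain/sprinkler example), not Pearl's own notation or algorithms:

```python
# A causal model as a directed graph: each key is a variable, each edge
# points from cause to effect. Example variables are hypothetical.
causal_graph = {
    "rain": ["wet_ground"],
    "sprinkler": ["wet_ground"],
    "wet_ground": ["slippery"],
}

def ancestors(graph, node):
    """Variables that can causally influence `node` (transitive parents)."""
    parents = [v for v, children in graph.items() if node in children]
    result = set(parents)
    for p in parents:
        result |= ancestors(graph, p)
    return result

print(sorted(ancestors(causal_graph, "slippery")))
# ['rain', 'sprinkler', 'wet_ground']
```

Pearl's contribution is in what can be computed over such graphs (interventions, counterfactuals, identifiability), not merely the graph structure itself.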

Comment author: curi 07 April 2011 02:28:44AM 2 points [-]

I don't understand the relevance.

Comment author: jimrandomh 07 April 2011 02:39:41AM 2 points [-]

Your quote seemed to be saying that that Bayesianism couldn't handle why/because questions, but Popperian philosophy could. I mentioned Pearl as a treatment of that class of question from a Bayes-compatible perspective.

Comment author: curi 07 April 2011 02:54:20AM 1 point [-]

Causality isn't explanation. X caused Y isn't the issue I was talking about.

For example, the statement "Murder is bad because it is illiberal" is an explanation of why it is bad. It is not a statement about causality.

You may say that "illiberal" is a short cut for various other ideas. And you may claim that eventually that reduces away to causal issues. But that would be reductionism. We do not accept that high level concepts are a mistake or that emergence isn't important.

Comment author: [deleted] 07 April 2011 01:58:09AM *  -1 points [-]

Actually, one of the reason I stood by this interpretation of Popper was because one of the quotes posted in one of the other threads here:

"the falsificationists or fallibilists say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed"

Which is apparently from Conjectures and Refutations, pg 309. Regardless, I don't care about this argument overmuch, since we seem to have moved on to some other points.

[Solomonoff Induction] does not solve the problem to my satisfaction. It orders theories which make identical predictions (about all our data, but not about the unknown) and then lets you differentiate by that order. But isn't that ordering arbitrary? It's just not true that short and simple theories are always best; sometimes the truth is complicated.

Remember that in Bayesian epistemology, probabilities represent our state of knowledge, so as you pointed out, the simplest hypothesis that fits the data so far may not be the true one because we haven't seen all of the data. But it is necessarily our best guess because of the conjunction rule.

Comment author: JoshuaZ 07 April 2011 02:42:59AM 1 point [-]

Remember that in Bayesian epistemology, probabilities represent our state of knowledge, so as you pointed out, the simplest hypothesis that fits the data so far may not be the true one because we haven't seen all of the data. But it is necessarily our best guess because of the conjunction rule.

You are going to have to expand on this. I don't see how the conjunction rule implies that simpler hypotheses are in general more probable. This is true if we have two hypotheses where one is X and the other is "X and Y" but that's not how people generally apply this sort of thing. For example, I might have a sequence of numbers that for the first 10,000 terms has the nth term as the nth prime number. One hypothesis is that the nth term is always the nth prime number. But I could have as another hypothesis some high degree polynomial that matches the first 10,000 primes. That's clearly more complicated. But one can't use conjunction to argue that it is less likely.
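JoshuaZ's prime-sequence point can be made concrete with a sketch (illustrative code, not from the original discussion): the unique degree-5 polynomial through the first six primes fits all the observed data exactly, yet disagrees with the "nth term is the nth prime" hypothesis about the unseen seventh term, so conjunction alone cannot rank the two.

```python
from fractions import Fraction

# First seven primes; the 7th is held out as the "unseen" data point.
primes = [2, 3, 5, 7, 11, 13, 17]

def lagrange_poly(points):
    """Return a function evaluating the unique polynomial through `points`."""
    def p(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)  # Lagrange basis factor
            total += term
        return total
    return p

# Observed data: the first six terms of the sequence.
data = [(n, primes[n - 1]) for n in range(1, 7)]
poly = lagrange_poly(data)

# Both hypotheses fit every observed term exactly...
assert all(poly(n) == primes[n - 1] for n in range(1, 7))
# ...but they disagree about the unseen 7th term.
print(poly(7), primes[6])  # -> -6 17
```

Neither hypothesis is a conjunction of the other, which is why the conjunction-rule argument by itself says nothing about which to prefer.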

Comment author: [deleted] 07 April 2011 04:52:44AM *  1 point [-]

Imagine that I have some set of propositions, A through Z, and I don't know the probabilities of any of these. Now let's say I'm using these propositions to explain some experimental result--since I would have uniform priors for A through Z, it follows that an explanation like "M did it" is more probable than "A and B did it," which in turn is more probable than "G and P and H did it."
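The ordering claimed here follows mechanically from the product rule; a minimal sketch (the prior value is invented, and independence is assumed purely for illustration) of why each extra conjunct can only lower the probability:

```python
from fractions import Fraction

# Assign the same prior p < 1 to each of the propositions A..Z independently.
# Each extra conjunct multiplies in another factor of p, so longer
# conjunctions are strictly less probable -- whatever p actually is.
p = Fraction(3, 10)  # the shared (unknown) prior; any 0 < p < 1 gives this ordering

p_m = p              # P("M did it")
p_ab = p * p         # P("A and B did it")
p_gph = p * p * p    # P("G and P and H did it")

assert p_m > p_ab > p_gph
print(p_m, p_ab, p_gph)  # -> 3/10 9/100 27/1000
```

Independence is only a convenience here: P(A and B) <= P(A) holds regardless, and that inequality is as far as the conjunction rule by itself goes, which is JoshuaZ's point below.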

Comment author: JoshuaZ 07 April 2011 04:58:22AM 1 point [-]

Yes, I agree with you there. But this is much weaker than any general form of Occam. See my example with primes. What we want to say in some form of Occam approach is much stronger than what you can get from simply using the conjunction argument.

Comment author: curi 07 April 2011 02:22:42AM *  1 point [-]

There are so many problems here that it's hard to choose a starting point.

1) The data set you are using is biased (it is selective; all observation is selective).

2) There is no such thing as "raw data" -- all your observations are interpreted, and your interpretations may be mistaken.

3) What do you mean by "best guess"? One meaning is "most likely to be the final, perfect truth", but a different meaning is "most useful now".

4) You say "probabilities represent our state of knowledge". However, there are infinitely many theories with the same probability. Or there would be, except for your Solomonoff prior giving simpler theories higher probability. So the important part of the "state of our knowledge" represented by these probabilities consists mostly of the Solomonoff prior and nothing else, because it, and it alone, is dealing with the hard problem of epistemology (dealing with theories which make identical predictions about everything we have data for).

5) You can have infinite data and still get all non-empirical issues wrong.

6) Regarding the conjunction rule, there is a miscommunication: this does not address the point I was trying to make. I think you have a premise like "all more complicated theories are merely conjunctions of simpler theories". But that is to conceive of theories very differently than Popperians do, in what we see as a limited and narrow way. To begin to address these issues, let's consider what's better: a bald assertion, or an assertion plus an explanation of why it is correct? If you want "most likely to happen to be the perfect, final truth" you are better off with only an unargued assertion (since any argument may be mistaken). But if you want to learn about the world, you are better off not relying on unargued assertions.

Comment author: falenas108 07 April 2011 12:31:38AM -1 points [-]

Sorry, didn't see you posted this before I replied too...

Comment author: [deleted] 07 April 2011 12:34:48AM 0 points [-]

Actually, I'm glad you replied as well--the more quotes about/by Popper that we unearth, the more accurate we will be.

Comment author: Peterdjones 18 July 2011 12:03:38AM *  2 points [-]

If anyone can bear more of this, Popper's argument against induction using Bayes is being discussed here

Comment author: endoself 07 April 2011 12:05:45AM 2 points [-]

The thing intended as the proof is most of chapter 2. I dislike Jaynes' assumptions there, since I find many of them superfluous compared to other proofs. You probably like them even less, since one is "Representation of degrees of plausibility by real numbers".

Comment author: curi 07 April 2011 12:09:20AM *  2 points [-]

It cannot be a proof of Bayesian epistemology itself if it makes assumptions like that.

It is merely a proof of some theorems in Bayesian epistemology given some premises that Bayesians like.

If you have a different proof which does not make assumptions I disagree with, then let's hear it. Otherwise you can give up on proving and start arguing why I should agree with your starting points. Or maybe even, say, engaging with Popper's arguments and pointing out mistakes in them (if you can find any).

Comment author: Peterdjones 12 April 2011 08:31:43PM 1 point [-]

You are complaining that it is not a deduction of Bayes from no assumptions whatever. But all that is needed is that those assumptions can be made to "work" -- i.e., applied without contradiction, quodlibet, or other disaster.

Comment author: Peterdjones 15 April 2011 03:18:53PM 0 points [-]

Remember, Popper himself said it all starts with common sense.

Comment author: endoself 07 April 2011 02:42:12AM -1 points [-]

I agree that it is by no means a complete proof of Bayesian epistemology. The book I pointed you to might have a more complete one, though I doubt it will be complete since it seems more like a book about using statistics than about rigorously understanding epistemology.

I am currently collecting the necessary knowledge to write the full proof myself, if it is possible (not because of this debate, because I kept being annoyed by unjustified assumptions that didn't even seem necessary).

Comment author: curi 07 April 2011 02:55:59AM 0 points [-]

Good luck. But, umm, do you have some argument against fallibilism? Because you're going to need one.

Comment author: endoself 07 April 2011 03:35:59AM *  1 point [-]

I think I massively overstated my intention. I meant the full proof of the stuff we know; the thing I think could be in Mathematical Statistics, Volume 1: Basic and Selected Topics.

Anyway, I think I accept fallibilism, at least going by the Wikipedia page. Why do you think I don't? The confusion is understandable, because I've been talking about idealized agents a lot more than about humans actually applying Bayesianism.

Comment author: curi 07 April 2011 03:48:40AM 1 point [-]

I think you are not a fallibilist because you want to prove philosophical ideas.

But we can't have certainty. So what do you even think it means to "prove" them? Why do you want to prove them instead of give good arguments on the matter?

Comment author: endoself 07 April 2011 04:15:22AM *  0 points [-]

I use the word prove because I'm doing it deductively in math. I already linked you to the 2+2=3 thing, I believe. Also, the question of how I would, for example, change AI design if a well-known theorem is wrong (pretend it is the future and the best theorems proving Bayesianism are better known and I am working on AI design) is both extremely hard to answer and unlikely to be necessary. Well, unlikely is the wrong word; what is P(X | "There are no probabilities")? :)

Comment author: calef 07 April 2011 05:10:07AM *  1 point [-]

Probably the most damning criticism you'll find, curi, is that fallibilism isn't useful to the Bayesian.

The fundamental disagreement here is somewhere in the following statement:

"There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true."

I suspect your disagreement is in one of several places.

1) You disagree that there even exist epistemically "true" facts. 2) That we can determine how likely something is to be true. Or 3) that likelihood of being true (as defined by us) is reason to believe the truth of something.

I can actually flesh out your objections to all of these things.

For 1, you could probably successfully argue that we aren't capable of determining whether we've ever actually arrived at a true epistemic statement, because real certainty doesn't exist; thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God--i.e. shaky to the point of not concerning oneself with them altogether.

2 basically ties in with the above directly.

3 is a whole 'nother ball game, and I don't think it's really been broached yet by anyone, but it's certainly a valid point of contention. I'll leave it out unless you'd like to pursue it.

The Bayesian counter to all of these is simply, "That doesn't really do anything for me."

Declaring the certainty we have, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go. It will fall to the ground. I have an incredibly huge amount of certainty in my ability to repeat that experiment.

That I cannot foresee the philosophical paradigm that will uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of my hypothesis. Such an apples-aren't-falling-when-dropped paradigm would literally (and necessarily) uproot everything else we know about the world.

Basically, what I'm trying to say is that all you're ever going to get out of a Bayesian is, "No, I disagree. I think we can have certainty." And the only way you could disprove conclusions made by Bayesians is through means the Bayesian would have already seen, and thus the Bayesian would have already rejected said conclusion.

You've already outlined that the fallibilist will just keep tweaking explanations until an explanation with no criticism is reached. I think you might find Bayesianism more palatable if you just pretend that we aren't trying to find certainty, just say we're trying to minimize criticism.

This probably hasn't been a very satisfying answer. I certainly agree it's useful to have an understanding of the biases to our certainties. I also think Bayesianism happens to build that into itself quite well. Personally, I don't think there's anything I'm absolutely certain about, because to claim so would be silly.

Comment author: endoself 07 April 2011 05:32:57AM 1 point [-]

Small nitpick: I don't like your use of the word 'certainty' here. Especially in philosophy, it has too much of a connotation of "literally impossible for me to be wrong" rather than "so ridiculously unlikely that I'm wrong that we can just ignore it", which may cause confusion.

Comment author: calef 07 April 2011 05:40:16AM 0 points [-]

Where don't you like it? I don't think anyone actually argues for your first definition, because, like I said, it's silly. I think curi's point is that fallibilism is predicated on your second definition not (ever?) being a valid claim.

My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).

Comment author: paulfchristiano 07 April 2011 12:19:41AM *  5 points [-]

I don't understand Popper's work beyond the Wikipedia summary of critical rationalism. That summary, as well as the debate here at LW, appear to be confused and essentially without value. If this is not the case, you should update this post to include not just a description of how supporters of Bayesianism don't understand Popper, but why they should care about this discussion--why Bayesianism is not, as it seems, obviously the correct answer to the question Popper is trying to answer.

If you want to make bets about the future, Bayesianism will beat whatever else you could use. To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts, if you define Bayesianism and "winning" precisely.

It seems like the only possible room for debate is the choice of prior. Everyone is forced to either implicitly choose a prior or else bet in a way that is manifestly irrational. This is also a mathematical fact. The Solomonoff prior provably isn't too bad. You just have to get over the arbitrariness.

Edit: Let's make this more precise. I claim that if we play a betting game, I can reconstruct a prior from your strategy such that a Bayesian using that prior will beat you in expectation. Do you object to this mathematical statement, or do you object to the interpretation of this fact as "Bayesianism is correct"? I'm not sure which side of the fence you are on, but I suppose it must be one or the other, so if we get that sorted out maybe we can make progress.

Comment author: curi 07 April 2011 12:30:32AM 1 point [-]

I don't understand Popper's work beyond the Wikipedia summary of critical rationalism

FYI that won't work. Wikipedia doesn't understand Popper. Secondary sources promoting myths, as Jaynes did, are common. A pretty good overview is the Popper book by Bryan Magee (only about 100 pages).

without value

I posted criticisms of Jaynes' arguments (or more accurately, his assumptions). I posted an argument about support. Why don't you answer it?

You just have to get over the arbitrariness.

You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and that the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!

Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.

If you want somewhere to start online, you could read

http://fallibleideas.com/

If you want to make bets about the future

That is not primarily what we want. And what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).
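(For reference, the theorem being distinguished from the epistemology here is just the probability identity P(H|E) = P(E|H)P(H)/P(E). A worked example with invented numbers -- a diagnostic test that is 90% sensitive and 95% specific for a condition with a 1% base rate:)

```python
# Bayes' theorem applied to illustrative, made-up numbers.
p_h = 0.01              # prior P(H): base rate of the condition
p_e_given_h = 0.90      # P(E | H): sensitivity of the test
p_e_given_not_h = 0.05  # P(E | not H): false-positive rate

# Total probability of a positive result, then the posterior via Bayes.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 3))  # -> 0.154
```

The arithmetic is uncontroversial; the dispute in this thread is only over applying such updates as a general theory of knowledge.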

To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts,

Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:

It seems like the only possible room for debate is the choice of prior.

Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn't have, such as not having arbitrary foundations.

What you're doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that's bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes's theorem at all).

Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)

Comment author: Peterdjones 15 April 2011 03:14:18PM 3 points [-]

Why don't you fix the WP article?

Comment author: paulfchristiano 07 April 2011 01:02:58AM 9 points [-]

Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right?

Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably so. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don't have to spend time thinking about it, because it is solved. I would not generally criticize a rival's ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.

Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say?

Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn't object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).

The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.

Really this entire discussion comes down to what we want out of epistemology.

That [guiding betting] is not primarily what we want.

What do you want? I don't understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:

I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.

Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
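The decide-between-actions procedure described above can be sketched in a few lines (the states, payoffs, and posterior probabilities here are invented purely for illustration):

```python
# Posterior beliefs over world states, assumed already updated on evidence.
posterior = {"rain": 0.3, "dry": 0.7}

# Payoff of each action in each state (made-up utilities).
payoffs = {
    "take umbrella": {"rain": 1.0, "dry": -0.2},   # mild cost of carrying it
    "leave umbrella": {"rain": -5.0, "dry": 0.0},  # large cost if caught in rain
}

def expected_payoff(action):
    # Expected utility: probability-weighted sum over states.
    return sum(posterior[s] * payoffs[action][s] for s in posterior)

# Choosing B over A is exactly a bet on the consequences; pick the best bet.
best = max(payoffs, key=expected_payoff)
print(best)  # -> take umbrella
```

Everything contentious is packed into where `posterior` comes from, which is why the discussion keeps returning to the choice of prior.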

You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper's ideas aren't nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?

If I have to bet, or make a decision that affects people's lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.

Comment author: curi 07 April 2011 02:07:06AM 1 point [-]

Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably.

You don't think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).

But you haven't provided any argument that Popper in particular was confused, irrefutable, or whatever. I don't know about you, but as someone who wants to improve my epistemological knowledge I think it's important to consider all the major ideas in the field at the very least enough to know one good criticism of each.

Refusing to address criticism because you think you already have the solution is very closed-minded, is it not? You think you're done with thinking, you have the final truth, and that's that?

The only sort of argument which warrants response is an objection to my current definitive answer.

Popper published several of those. Where's the response from Bayesians?

One thing to note is that it's hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread things, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc.).

The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested

Popper solved that problem.

I am fairly confident the problem has no solution for what are essentially obvious reasons

The standard reasons seem obvious because of your cultural bias. Since Aristotle, some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this.

One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn't exist automatically but is being created by your own assumptions.

Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.

What are you talking about? You haven't read his books and claim he didn't give enough detail? He was something of a workaholic who didn't watch TV, didn't have a big social life, and worked and wrote all the time.

What do you want?

To create knowledge, including explanatory and non-instrumentalist knowledge. You come off to me like a borderline positivist who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I'm not assuming you actually are a positivist, but I'm not really seeing much difference yet.)

To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world.

To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.

Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?

Neither. You can and should do better!

Comment author: David_Allen 07 April 2011 04:16:38PM 0 points [-]

To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.

Given well-defined contexts and meanings for good and bad, I don't see why Bayesianism could not be effectively applied to moral problems.

Comment author: curi 07 April 2011 06:40:28PM 0 points [-]

Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

Comment author: JoshuaZ 07 April 2011 06:58:15PM *  2 points [-]

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you've shown no details of that claim other than to say that you do the same thing you would do but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like say slavery or free speech) and show how the Popperian would approach? An concrete worked out example would be very helpful.

Comment author: curi 07 April 2011 07:00:42PM *  -2 points [-]

http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7

And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.

Comment author: JoshuaZ 07 April 2011 07:10:36PM 0 points [-]

And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.

Consider that you are replying to a statement in which I just said that all you've done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?

Comment author: David_Allen 07 April 2011 08:26:10PM 0 points [-]

Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.

First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.

Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.

This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.

Comment author: curi 07 April 2011 08:41:46PM 0 points [-]

First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.

Sorry. I have no idea who is who. Don't mind me.

This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.

The Popperian method is universal.

if Bayesianism is in some sense Turing complete then it can be used to do all of this

Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.

Example or details?

Comment author: David_Allen 07 April 2011 09:59:13PM 0 points [-]

Sorry. I have no idea who is who. Don't mind me.

No problem, I'm just pointing out that there are other perspectives out here.

The Popperian method is universal.

Sure, in the sense that it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example, I'm not going to use it to evaluate "2 + 3"; it is much more efficient for me to use the arithmetic abstraction.

But we don't know how to make it do that stuff. Epistemology should help us.

Agreed, it is one of the reasons that I am actively working on epistemology.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.

Example or details?

The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input.

Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.

Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
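A toy version of the naive Bayes classifier mentioned above, classifying short descriptions of actions as "good" or "bad" from labelled examples (the training data, labels, and uniform class prior are all invented for illustration):

```python
import math
from collections import Counter

# Invented training examples: (description, label).
train = [
    ("helping a stranger", "good"),
    ("sharing food", "good"),
    ("stealing food", "bad"),
    ("hurting a stranger", "bad"),
]

# Per-class word counts and totals.
counts = {"good": Counter(), "bad": Counter()}
totals = {"good": 0, "bad": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab = {w for text, _ in train for w in text.split()}

def classify(text):
    best_label, best_score = None, float("-inf")
    for label in counts:
        # log P(label) + sum of log P(word | label), with Laplace smoothing.
        score = math.log(0.5)  # uniform class prior
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("sharing with a stranger"))  # -> good
```

This only illustrates the mechanics David_Allen describes: the classifier learns whatever regularities are in the labelled examples; it does not itself supply the moral standard behind the labels, which is the point curi presses above.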

Comment author: curi 08 April 2011 12:37:39AM 0 points [-]

Sure, in the sense it is Turing complete;

No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which I think is all of them, but that doesn't matter to universality).

Not in the sense that it's Turing complete so you could, by a roundabout way and using whatever methods, do anything.

I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:

With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"

You have very limited ambitions. You're trying to focus on small questions because you think bigger ones like "what is moral, objectively?" are too hard and, since your math won't answer them, it's hopeless.

Comment author: paulfchristiano 07 April 2011 01:12:55AM *  2 points [-]

Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone's particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.

In particular, if you ever try to make a claim of the form "You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief" then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.

Edit: changed wording to be less of an ass.

Comment author: curi 07 April 2011 01:24:01AM 2 points [-]

In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?

Given that I do make claims on my website, I wonder why you don't pick one and point out something you think is wrong with it.

Comment author: paulfchristiano 07 April 2011 01:33:16AM *  2 points [-]

Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)

I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.

If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments--it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website--presumably it was not designed first and foremost as a response to Bayesianism--but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.

To be clear, what I am looking for is a statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage." Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been "wrong" where Popper would be clearly "right" at any historical point would be good enough to argue about.

Comment author: curi 07 April 2011 01:47:06AM *  0 points [-]

statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology."

Do you assert that? It is wrong and has real-world consequences. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.

I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.

Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.

Comment author: Desrtopa 07 April 2011 01:58:08AM 4 points [-]

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.

It doesn't take Popperian epistemology to learn social fluency. I've learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.

If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.

Comment author: JoshuaZ 07 April 2011 02:07:05AM 2 points [-]

If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.

That's a claim that only makes sense in certain epistemological systems...

Comment author: curi 07 April 2011 02:09:13AM *  3 points [-]

I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.

Comment author: JoshuaZ 07 April 2011 02:36:12AM 0 points [-]

I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.

Hmm? I'm not sure who you mean by we? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing. That's not to say I understand how they think it is implied/ok in a Popperian framework).

Comment author: paulfchristiano 07 April 2011 02:09:39AM 3 points [-]

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference?

No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.

Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can't listen to an mp3 right now.

My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren't the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn't? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?

I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.

I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don't yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?

This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.

I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.

Comment author: curi 07 April 2011 02:46:51AM 0 points [-]

Could you explain how a Popperian disputes such an assertion? [(50% probability of humanity surviving the next century)]

e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depends on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.

You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?

You seem to be arguing that Bayesianism is wrong, which is a very different thing.

I think it's wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes' theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don't want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.

Would you never take a bet?

Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (b/c e.g. there is no way to make them non-arbitrary). And it's not something a fallibilist can bet on because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?

Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?

We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.

This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.

Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).

Comment author: jake987722 07 April 2011 03:24:11AM 6 points [-]

So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?

Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.

First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved.

Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors.

Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?

Comment author: curi 07 April 2011 03:59:34AM 2 points [-]

When you have exactly one non-refuted theory, you go with that.

The other cases are more complicated and difficult to understand.

Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?

If i convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?

If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.

[option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.

But some are criticized and some aren't.

[option 2] conjecture that best weathered the criticisms you were able to muster

But how is that to be judged?

No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)

Or is it something radically different from these two altogether?

Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.

Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.

Comment author: Larks 08 April 2011 01:03:10AM 0 points [-]

And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge.

You're equivocating between "knowing exactly the contents of the new knowledge", which may be impossible for the reason you describe, and "know some things about the effect of the new knowledge", which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.

Comment author: timtyler 07 April 2011 12:48:52PM *  1 point [-]

what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).

That's because to a Bayesian, these things are the same thing. Epistemology is all about probability - and vice versa. Bayes's theorem includes induction and confirmation. You can't accept Bayes's theorem and reject induction without crazy inconsistency - and Bayes's theorem is just the math of probability theory.

Comment author: [deleted] 07 April 2011 01:05:19PM 0 points [-]

If I understand correctly, I think curi is saying that there's no reason for probability and epistemology to be the same thing. That said, I don't entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these "epistemological problems" that Popper solves but Bayes doesn't?

Comment author: JGWeissman 07 April 2011 06:19:50AM 4 points [-]

A huge strength of Bayesian epistemology is that it tells me how to program computers to form accurate beliefs. Has Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter?

Comment author: curi 07 April 2011 06:59:03AM -2 points [-]

Bayesian epistemology didn't do that. Bayes' theorem did. See the difference?

Comment author: JGWeissman 07 April 2011 04:40:10PM 3 points [-]

Bayesian epistemology didn't do that. Bayes' theorem did.

Bayes' theorem is part of probability theory. Bayesian epistemology essentially says to take probability theory seriously as a normative description of degrees of belief.

If you don't buy that and really want to split the hair, then I am willing to modify my question to: Has the math behind Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter? (Is there math behind Popperian epistemology?)
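For the record, the math behind such a filter is roughly this -- a minimal naive Bayes sketch with made-up training data, not Gmail's actual algorithm, which is of course far more sophisticated:

```python
from collections import Counter

def train(spam_docs, ham_docs):
    # Count word occurrences in each class of training documents.
    spam_counts = Counter(w for d in spam_docs for w in d.split())
    ham_counts = Counter(w for d in ham_docs for w in d.split())
    return spam_counts, ham_counts

def spam_probability(doc, spam_counts, ham_counts, prior_spam=0.5):
    # Odds form of Bayes' theorem, with Laplace (add-one) smoothing
    # so unseen words don't zero out the whole product.
    odds = prior_spam / (1 - prior_spam)
    n_spam = sum(spam_counts.values())
    n_ham = sum(ham_counts.values())
    vocab = len(set(spam_counts) | set(ham_counts))
    for w in doc.split():
        p_w_spam = (spam_counts[w] + 1) / (n_spam + vocab)
        p_w_ham = (ham_counts[w] + 1) / (n_ham + vocab)
        odds *= p_w_spam / p_w_ham
    return odds / (1 + odds)

spam, ham = train(["buy viagra now", "cheap pills buy"],
                  ["meeting at noon", "lunch with the team"])
print(spam_probability("buy pills", spam, ham) > 0.8)        # spammy words
print(spam_probability("meeting at noon", spam, ham) < 0.5)  # hammy words
```

Whether you call the resulting number a "degree of belief" or just a score, the classifier's behavior is the same -- which is the substance of the disagreement here.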

Comment author: curi 07 April 2011 05:59:18PM -1 points [-]

gmail's spam filter does not have degrees of belief or belief.

It has things which you could call by those words if you really wanted to. But it wouldn't make them into the same things those words mean when referring to people.

Comment author: JGWeissman 07 April 2011 06:14:51PM 4 points [-]

But it wouldn't make them into the same things those words mean when referring to people.

I want the program to find the correct belief, and then take good actions based on that correct belief. I don't care if it lacks the conscious experience of believing.

You are disputing definitions and ignoring my actual question. Your next reply should answer the question, or admit that you do not know of an answer.

Comment author: Alicorn 07 April 2011 06:12:49PM 4 points [-]

gmail's spam filter does not have degrees of belief or belief.

It has things which you could call by those words if you really wanted to. But it wouldn't make them into the same things those words mean when referring to people.

Augh, this reminded me of a quote that I can't seem to find based on my tentative memory of its component words... it was something to the effect that we anthropomorphize computers and talk about them "knowing" things or "communicating" with each other, and some people think that's wrong and they don't really do those things, and the quote-ee was of the opinion that computers were clarifying what we meant by those concepts all along. Anybody know what I'm talking about?

Comment author: curi 07 April 2011 06:37:10PM 1 point [-]

To be clear, I think computers can do those things and AIs will, and that will help clarify the concepts a lot.

But I don't think that microsoft word does it. Nor any game "AI" today. Nor gmail's spam filter, which just mindlessly does math.

Comment author: Peterdjones 12 April 2011 08:23:30PM *  2 points [-]

"'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me."

Bayesians appear to have answers to these questions. Moreover, far from wishing to refute Popper, they can actually incorporate a form of Popperianism.

"On the other hand, Popper's idea that there is only falsification and no such thing as confirmation turns out to be incorrect. Bayes' Theorem shows that falsification is very strong evidence compared to confirmation, but falsification is still probabilistic in nature; it is not governed by fundamentally different rules from confirmation, as Popper argued."

But of course Popper was a falliblist as well as a falsificationist, so his falsifications aren't absolute and certain anyway. Bayes just brings out that where you don't have absolute falsification, you can't have absolute lack of positive support. Falsification of T has to support not-T. But the support gets spread thinly...
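To make this concrete, here is a toy Bayes' theorem calculation (the numbers are invented purely for illustration) showing why a "falsifying" observation moves a belief much further than a "confirming" one, even though both work by the same rule:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# "Falsifying" observation: nearly impossible if T is true.
p_after_falsification = posterior(prior=0.5, p_e_given_h=0.01, p_e_given_not_h=0.5)

# "Confirming" observation: predicted by T, but fairly likely anyway.
p_after_confirmation = posterior(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.5)

print(round(p_after_falsification, 3))  # drives P(T) very low (~0.02)
print(round(p_after_confirmation, 3))   # raises P(T) only modestly (~0.643)
```

Both updates use the identical formula; falsification is just the limiting case where P(E|T) is tiny, so nearly all the probability mass flows to not-T -- "the support gets spread thinly" over the alternatives.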

Comment author: Peterdjones 12 April 2011 08:11:16PM 2 points [-]

Curi,

"Some first chapter assumptions are incorrect or unargued. It begins with an example with a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false."

Popper's epistemology doesn't explain that the conclusion of the argument has no validity, in the sense of being certainly false. In fact, it requires that the conclusion is not certainly false. No conjecture is certainly false.

Perhaps you meant he shows that the argument is invalid in the sense of being a non sequitur. (A non sequitur can still have a plausible or true conclusion). Of course it is not valid in the sense of traditional, necessitarian deduction. The whole point is that it is something different. And the argument that this non-traditional, plausibility-based deduction works is just the informal observation that we use it all the time and it seems to work. What else could it be? If it were valid by traditional deduction it would BE traditional deduction.

" Later when he gets into more mathematical stuff which doesn't (directly) rest on appeals to intution, it does rest on the ideas he (supposedly) established early on with his appeals to intuition."

The Popperian argument against probabilistic reasoning is that it can't be shown how it works. If Jaynes' math shows how it works, that objection is removed.

"This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else)."

Of course he has to believe in some FAPP ("for all practical purposes") refutation, or he ends up saying nothing at all.

Comment author: Peterdjones 14 April 2011 02:27:20PM 3 points [-]

"Science, philosophy and rational thought must all start from common sense". KRP, Objective Knowledge, p33.

Starting with common sense is exactly what Jaynes is doing. (Popper says that what is important is not to take common sense as irrefutable).

Comment author: David_Gerard 07 April 2011 12:15:23PM *  2 points [-]

It has occurred to me before that the lack of a proper explanation on LessWrong of Bayesian epistemology (and not just saying "Here's Bayes' theorem and how it works, with a neat Java applet") is a serious lack. I've been reduced to linking the Stanford Encyclopedia of Philosophy article, which is really not well written at all.

It is also clear from the comments on this post that people are talking about it without citable sources, and are downvoting as a mark of disagreement rather than anything else. This is bad as it directly discourages thought or engagement on the topic from those trying to disagree in good faith, as curi is here.

Is there a decent explanation of Bayesian epistemology per se (not the theorem, the epistemology) that doesn't start by talking about Popper or something else, that the Bayesian epistemology advocates here could link to? This would lead to a much more productive discussion, as everyone might at least start on approximately the same page.

Comment author: benelliott 07 April 2011 12:40:43PM *  0 points [-]

I don't know if these are what you're looking for but:

Probability Theory: The Logic of Science by Jaynes, spends its first chapter explaining why we need a 'calculus of plausibility' and what such a calculus should hope to achieve. The rest of the book is mostly about setting it up and showing what it can do. (The link does not contain the whole book, only the first few chapters, you may need to buy or borrow it to get the rest).

Yudkowsky's Technical explanation, which assumes the reader is already familiar with the theorem, explains some of its implications for scientific thinking in general.

Comment author: David_Gerard 07 April 2011 01:29:13PM 1 point [-]

See here for what I see the absence of. There's a hole that needs filling here.

Comment author: falenas108 07 April 2011 12:28:06AM *  -1 points [-]

From the research I have done in the last 5 minutes, it seems as though Popper believed that all good scientific theories should be subject to experiments that could prove them wrong.
Ex:

"the falsificationists or fallibilists say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed" -Popper

This seems to imply that theories can be proved false.