I was directed to this book (http://www-biba.inrialpes.fr/Jaynes/prob.html) in conversation here:

http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3ug7?context=1#3ug7

I was told it had a proof of Bayesian epistemology in the first two chapters. One of the things we were discussing was Popper's epistemology.

Here are those chapters:

http://www-biba.inrialpes.fr/Jaynes/cc01p.pdf

http://www-biba.inrialpes.fr/Jaynes/cc02m.pdf

I have not found any proof here that Bayesian epistemology is correct. There is not even an attempt to prove it. Various things are assumed in the first chapter. In the second chapter, some things are proven given those assumptions.

Some of the first chapter's assumptions are incorrect or unargued. It begins with an example about a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false. I agree so far. Next it says "we will grant that it had a certain degree of validity". But I will not grant that. Popper's epistemology explains that *this is a mistake* (and Jaynes makes no attempt at all to address Popper's arguments). In any case, simply assuming his readers will grant his substantive claims is no way to argue.

The next sentences blithely assert that we all reason in this way. Jaynes basically presents this kind of reasoning, and its issues, as his topic. This simply ignores Popper and makes no attempt to prove that Jaynes' approach is correct.

Jaynes goes on to give syllogisms which he calls "weaker" than deduction, and which he acknowledges are not deductively correct. And then he just says we use that kind of reasoning all the time. That sort of assertion only appeals to the already converted. Jaynes starts with arguments which appeal to the *intuition* of his readers, not with arguments which could persuade someone who disagreed with him (that is, good rational arguments). Later, when he gets into more mathematical material which doesn't (directly) rest on appeals to intuition, it does rest on the ideas he (supposedly) established earlier with those appeals to intuition.

The outline of the approach here is to gloss quickly over substantive philosophical assumptions, never provide serious arguments for them, treat them as common sense without spelling them out, and then later provide arguments which are rigorous *given the assumptions glossed over earlier*. This is a mistake.

So we get, e.g., a section on Boolean Algebra which says it will state previous ideas more formally. This briefly acknowledges that the rigorous parts depend on the non-rigorous parts. And the very important problem of carefully detailing how the mathematical objects under discussion correspond to the real-world things they are supposed to help us understand does not receive adequate attention.

Chapter 2 begins by saying we've now formulated our problem and the rest is just math. What I take from that is that the early assumptions won't be revisited but simply used as premises. So the rest is pointless if those early assumptions are mistaken, and Bayesian epistemology cannot be proven in this way to anyone who doesn't grant the assumptions (such as a Popperian).

Moving on to Popper, Jaynes is ignorant of the topic and unscholarly. He writes:

http://www-biba.inrialpes.fr/Jaynes/crefsv.pdf

> Karl Popper is famous mostly through making a career out of the doctrine that theories may not be proved true, only false

This is pure fiction. Popper is a fallibilist and said (repeatedly) that theories cannot be proved false (or anything else).

It's important to criticize unscholarly books that promote myths about rival philosophers instead of addressing their actual arguments. That's a major flaw not just in a particular paragraph but in the author's way of thinking. It's especially relevant in this case, since the author of the book is trying to tell us how to think.

Note that Yudkowsky made a similar unscholarly mistake, about the same rival philosopher, here:

http://yudkowsky.net/rational/bayes

> Previously, the most popular philosophy of science was probably Karl Popper's falsificationism - this is the old philosophy that the Bayesian revolution is currently dethroning.  Karl Popper's idea that theories can be definitely falsified, but never definitely confirmed

Popper's philosophy is not falsificationism, it was never the most popular, and it is fallibilist: it says ideas cannot be definitely falsified. It's bad to make this kind of mistake about what a rival's basic claims are when claiming to be dethroning him. The correct method of dethroning a rival philosophy involves understanding what it does say and criticizing that.

If Bayesians wish to challenge Popper they should learn his ideas and address his arguments. For example, he questioned the concept of positive support for ideas. Part of this argument involves asking two questions: 'What is support?' (not asking for its essential nature or a perfect definition, just for a clear and precise statement of what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me.


I have skimmed through the comments here and smelled a weak odour of a flame war. Well, the discussion is still rather civil and far from a flame war as understood on most internet forums, but it somehow doesn't fit well with what I am used to seeing here on LW.

The main problem I have is that you (i.e. curi) have repeatedly asserted that the Bayesians, including most of LW users, don't understand Popperianism and that Bayesianism is in fact worse, without properly explaining your position. It is entirely possible, even probable, that most people here don't actually get all subtleties of Popper's worldview. But then, a better strategy may be to first write a post which explains these subtleties and tells why they are important. On the other hand, you don't need to tell us explicitly "you are unscholarly and misinterpret Popper". If you actually explain what you ought to (and if you are right about the issue), people here will likely understand that they were previously wrong, and they will do it without feeling that you seek confrontation rather than truth - which I mildly have.

4Desrtopa13y
Upvoted and agreed. I feel at this point like further addressing the discussion on present terms would be simply irresponsible, more likely to become adversarial than productive. If curi wrote up such a post, it would hopefully give a meaningful place to continue from. Edit: It seems that curi has created such a post. I'm not entirely convinced that continuing the discussion is a good idea, but perhaps it's worth humoring the effort.
2TheOtherDave13y
For what it's worth, I have that feeling more than mildly and consequently stopped paying attention to the curi-exchange a while ago. Too much heat, not enough light. I've been considering downvoting the whole thread on the grounds that I want less of it, but haven't yet, roughly on the grounds that I consider it irresponsible to do so without paying more careful attention to it and don't currently consider it worth paying more attention to.
-1curi13y
By "properly explaining my position" I'm not sure what you want. Properly understanding it takes reading, say, 20 books (plus asking questions about them as you go, and having critical discussions about them, and so on). If I summarize, lots of precision is lost. I have tried to summarize. I can't write "a (one) post" that explains the subtitles of Popper. It took Popper a career and many books. Bayesianism has a regress/foundations problem. Yudkowsky acknowledges that. Popperism doesn't. So Popperism is better in a pretty straightforward way. But they were propagating myths about Popper. They were unscholarly. They didn't know wtf they were talking about, not even the basics. Basically all of Popper's books contradict those myths. It's really not cool to attribute positions to someone he never advocated. This mistake is easy to avoid by the method: don't publish about people you haven't read. Bad scholarship is a big deal, IMO.
5Desrtopa13y
Any system with axioms can be infinitely regressed or rendered circular if you demand that it justify the axioms. Critical Rationalism has axioms, and can be infinitely regressed. You were upvoted in the beginning for pointing out gaps in scholarship and raising ideas not in common circulation here. You yourself, however, have demonstrated a clear lack of understanding of Bayesianism, and have attracted frustration with your own lack of scholarship and confused arguments, along with failure to provide good reasons for us to be interested in the prospect of doing this large amount of reading you insist is necessary to properly understand Popper. If doing this reading were worthwhile, we would expect you to be able to give a better demonstration of why.
-12curi13y
3prase13y
I acknowledge that, although I would have preferred if you had done that before you wrote this post. Could be five posts. Even if such a defense can be sometimes valid, it is too often used to defend confused positions (think about theology) to be very credible.
-1curi13y
It would need to be 500 posts. But anyway, they are written and published. By Popper not me. They already exist and they don't need to be published on this particular website.
5prase13y
Following your advice expressed elsewhere, isn't the fact that the basics of Popperianism cannot be explained in five posts a valid criticism of Popperianism, which should be therefore rejected?
1curi13y
Why is that a criticism? What's wrong with that? Also maybe it could be. But I don't know how. And the basics could be explained quickly, to someone who didn't have a bunch of anti-Popperian biases, but people do have those b/c they are built into our culture. And without the details and precision, people complain about 1) not understanding how to do it or what it says, and 2) it not having enough precision and rigor
4prase13y
Actually I don't know what constitutes a criticism in your book (since you never specified), but you have also said that there are no rules for criticism, so I suppose that it is a criticism. If not, then please say why it is not a criticism. I am not going to engage in a discussion about my and your biases, since such debates rarely lead to an agreement.
-1curi13y
You can conjecture standards of criticism, or use the ones from your culture. If you find a problem with them, you can change them or conjecture different ones. For many purposes I'm pretty happy with common sense notions of standards of criticism, which I think you understand, but which are hard to explain in words. If you have a relevant problem with them, you can say it.
5[anonymous]13y
One thing you could do is write a post highlighting a specific example where Bayes is wrong and Popper is right. A lot of people have asked for specific examples in this thread; if you could give a detailed discussion of one, that would move the discussion to more fertile ground.
1curi13y
Can you give me a link to a canonical essay on Bayesian epistemology/philosophy, and I'll pick from there? Induction and justificationism are examples but I've been talking about them. I think you want something else. Not entirely sure what.
1[anonymous]13y
It's not at all canonical, but a paper that neatly summarizes Bayesian epistemology is "Bayesian Epistemology" by Stephan Hartmann and Jan Sprenger.
1curi13y
Found it. http://www.stephanhartmann.org/HartmannSprenger_BayesEpis.pdf Will take a look in a bit.
1[anonymous]13y
Excellent, thanks.
-12curi13y

The assumptions behind Cox's theorem are:

  1. Representation of degrees of plausibility by real numbers
  2. Qualitative correspondence with common sense
  3. Consistency

Would you please clearly state which of these you disagree with, and why? And if you disagree with (1), is it because you don't think degrees of plausibility should be represented, or because you think they should be represented by something other than real numbers, and if so, then what? (Please do not give an answer that cannot be defined precisely by mapping it to a mathematical set. And please do not suggest a representation that is obviously inadequate, such as booleans.)
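For reference, the standard conclusion drawn from these three desiderata (the usual Cox-style result, as presented in Jaynes' early chapters; generic notation, not a quotation from the book) is that degrees of plausibility can be rescaled to a function p with values in [0, 1] obeying

  product rule:  p(AB|C) = p(A|BC) p(B|C)
  sum rule:      p(A|C) + p(not-A|C) = 1

from which Bayes' theorem, p(A|BC) = p(B|AC) p(A|C) / p(B|C), follows whenever p(B|C) > 0.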

2curi13y
Could you explain what you're talking about a bit more? For example you state "consistency" as an assumption. What are you assuming is (should be?) consistent with what?

You may have valid points to make but it might help in getting people to listen to you if you don't exhibit apparent double standards. In particular, your main criticism seems to be that people aren't reading Popper's texts and related texts enough. Yet, at the same time, you are apparently unaware of the basic philosophical arguments for Bayesianism. This doesn't reduce the validity of anything you have to say but as an issue of trying to get people to listen, it isn't going to work well with fallible humans.

-11curi13y
5Larks13y
If only Jaynes had clearly listed them on page 114!
5jimrandomh13y
Cox's theorem is a proof of Bayes rule, from the conditions above. "Consistency" in this context means (Jaynes 19): If a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result; we always take into account all of the evidence we have relevant to a question; and we always represent equivalent states of knowledge by equivalent plausibility assignments. By "reason in more than one way", we specifically mean adding the same pieces of evidence in different orders. (Edit: It's page 114 in the PDF you linked. That seems to be the same text as my printed copy, but with the numbering starting in a different place for some reason.)
-8[anonymous]13y

I don't understand Popper's work beyond the Wikipedia summary of critical rationalism. That summary, as well as the debate here at LW, appear to be confused and essentially without value. If this is not the case, you should update this post to include not just a description of how supporters of Bayesianism don't understand Popper, but why they should care about this discussion--why Bayesianism is not, as it seems, obviously the correct answer to the question Popper is trying to answer.

If you want to make bets about the future, Bayesianism will beat whateve...

0curi13y
FYI that won't work. Wikipedia doesn't understand Popper. Secondary sources promoting myths, as Jaynes did, are common. A pretty good overview is the Popper book by Bryan Magee (only like 100 pages). I posted criticisms of Jaynes' arguments (or more accurately, his assumptions). I posted an argument about support. Why don't you answer it? You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it! Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books. If you want somewhere to start online, you could read http://fallibleideas.com/ That is not primarily what we want. And what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting). Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or: Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn't have, such as not having arbitrary foundations. What you're doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that's bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes's theorem at all). Here let me ask you a question: has any Bayesian ever published any substantive criticism of an

Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right?

Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don't have to spend time thinking about it, because it is solved. I would not generally criticize a rival's ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.

Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say?

Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn't object to anything in his philosophy, because it has essentially no content conc...

2curi13y
You don't think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider). But you haven't provided any argument that Popper in particular was confused, irrefutable, or whatever. I don't know about you, but as someone who wants to improve my epistemological knowledge I think it's important to consider all the major ideas in the field at the very least enough to know one good criticism of each. Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you're done with thinking, you have the final truth, and that's that..? Popper published several of those. Where's the response from Bayesians? One thing to note is it's hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...) Popper solved that problem. The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this. One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn't exist automatically but is being created by your own assumptions. What are you talking about? You haven't read his books and claim he didn't give enough detail? He was something of a workaholic who didn't watch TV, didn't have a big social life, and worked and wrote all the time. To create knowledge, includ
0David_Allen13y
Given well defined contexts and meanings for good and bad I don't see why Bayesianism could not be effectively applied to moral problems.
-1curi13y
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't. You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
3JoshuaZ13y
You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you've shown no details of that claim other than to say that you do the same thing you would do but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like say slavery or free speech) and show how the Popperian would approach it? A concrete worked-out example would be very helpful.
-5curi13y
0David_Allen13y
First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology. Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction. This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making. These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches. Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
-1curi13y
Sorry. I have no idea who is who. Don't mind me. The Popperian method is universal. Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us. Example or details?
0David_Allen13y
No problem, I'm just pointing out that there are other perspectives out here. Sure, in the sense it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example I'm not going to use it to decide the answer to the statement "2 + 3", it is much more efficient for me to use the arithmetic abstraction. Agreed, it is one of the reasons that I am actively working on epistemology. The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input. Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures. Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
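To make the classifier being described concrete, here is a minimal hand-rolled naive Bayes sketch in Python; the training statements, labels, and word features are invented for illustration and are not drawn from anything in this thread:

    from collections import Counter, defaultdict
    import math

    # Toy training data (entirely made up): short statements of moral positions
    # labelled "acceptable" or "unacceptable".
    training = [
        ("killing an unfaithful spouse is justified", "unacceptable"),
        ("killing in self defense is justified", "acceptable"),
        ("lying under oath is justified", "unacceptable"),
        ("lying to spare someone's feelings is justified", "acceptable"),
    ]

    class_counts = Counter()                # number of examples per class
    word_counts = defaultdict(Counter)      # word frequencies per class
    for text, label in training:
        class_counts[label] += 1
        word_counts[label].update(text.split())

    vocab = {w for counts in word_counts.values() for w in counts}

    def log_posterior(text, label):
        # log P(label) + sum over words of log P(word | label), with add-one smoothing.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return score

    def classify(text):
        # Pick the class with the highest (unnormalized) posterior.
        return max(class_counts, key=lambda label: log_posterior(text, label))

    print(classify("lying about an unfaithful spouse is justified"))

The "naive" part is the assumption that the words are conditionally independent given the class, which is what makes simple per-class word counts sufficient. Nothing in the sketch says where the class labels come from in the first place, which is the part of the dispute the surrounding comments are about.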
0curi13y
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which I think is all of them, but that doesn't matter to universality). Not in the sense that it's Turing complete so you could, by a roundabout way and using whatever methods, do anything. I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example: You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones like: what is moral objectively? are too hard and, since your math won't answer them, it's hopeless.
0David_Allen13y
Perhaps I don't understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it. But to respond to what I think you mean... If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in "justificationist epistemologies"... i.e. the halting problem. Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge. I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot junk -- Popper didn't fix this and neither will Bayesianism, it is more of a people problem -- but philosophy has also produced and is producing a lot of interesting and good ideas. I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper's approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context. This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions. A question like "what is moral objectively?" is easy. Nothing is "moral objectively". Meaning is created within contexts of assessment; if you want to know if something is "moral" you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning
2JoshuaZ13y
Minor nitpick: "at least capable of modeling any Turing machine", not "Turing complete". For example, something that had access to some form of halting oracle would be able to do more than a Turing machine.
-5[anonymous]13y
3Peterdjones13y
Why don't you fix the WP article?
2paulfchristiano13y
Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone's particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected. In particular, if you ever try to make a claim of the form "You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief" then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision. Edit: changed wording to be less of an ass.
2curi13y
In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too? Given that I do make claims on my website, I wonder why you don't pick one and point out something you think is wrong with it.
3paulfchristiano13y
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!) I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion. If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments--it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website--presumably it was not designed first and foremost as a response to Bayesianism--but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way. To be clear, what I am looking for is a statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage." Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been "wrong" where Popper would be clearly "right" at any historical point would be good enough to argue about.
-2curi13y
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis. I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss. Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.
4Desrtopa13y
It doesn't take Popperian epistemology to learn social fluency. I've learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology. If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.
1JoshuaZ13y
That's a claim that only makes sense in certain epistemological systems...
4curi13y
I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
-1JoshuaZ13y
Hmm? I'm not sure who you mean by we? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing. That's not to say I understand how they think it is implied/ok in a Popperian framework).
-6curi13y
3paulfchristiano13y
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you. Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can't listen to an mp3 right now. My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren't the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn't? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion? I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don't yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes? This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian's willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want. I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
0curi13y
e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all. You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theo
8jake98772213y
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared. First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved. Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors. Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
1curi13y
When you have exactly one non-refuted theory, you go with that. The other cases are more complicated and difficult to understand. Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede? If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere? If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first. But some are criticized and some aren't. But how is that to be judged? No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.). Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most. Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.
0jake98772213y
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can't reasonably guarantee that I will not have later objections as well before we've even had the discussion! So let me see if I'm understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we -- I don't want to say "assume that it's true," because that's probably not correct -- we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right? I'm not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, "in order to make a decision, we need to have a guiding theory which is currently impervious to criticism" (my current understanding of Popper's idea, roughly illustrated), isn't this just another way of saying: "the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?" In short, isn't imperviousness to criticism a type of justification in itself?
3curi13y
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place? That is the general idea (but incomplete). The reason we behave as if it's true is that it's the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn't want to act on an idea that we (thought we) saw a mistake in, over one we don't think we see any mistake with -- we should use what (fallible) knowledge we have. A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general). One reason that not being criticized isn't a justification is that saying it is gets you a regress problem. So let's not say that! The other reason is: what would that be adding as compared with not saying it? It's not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those). Terminology isn't terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don't like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.
0jake98772213y
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it's not clear that this discussion warrants an entirely new topic. Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by "justificationist" systems. It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper's system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper's system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate. According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question--but as you can see, the regress has begun.
4curi13y
I think it's a big topic. Began answering your question here: http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
3curi13y
No regress has begun. I already answered why: Try to regress me. It is possible, if you want, to create a regress of some kind which isn't the same one and isn't important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won't regard it as a real regress problem of the same type. You'll probably wonder how that's evaluated, but, well, it's not such a big deal. We'll quickly get to the point where your attempts to create regress look silly to you. That's different than the regresses inductivists face where it's the person trying to defend induction who runs out of stuff to say.
0Larks13y
You're equivocating between "knowing exactly the contents of the new knowledge", which may be impossible for the reason you describe, and "knowing some things about the effect of the new knowledge", which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
1timtyler13y
That's because to a Bayesian, these things are the same thing. Epistemology is all about probability - and vice versa. Bayes's theorem includes induction and confirmation. You can't accept Bayes's theorem and reject induction without crazy inconsistency - and Bayes's theorem is just the math of probability theory.
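For reference, the theorem itself is just the identity (standard notation: H a hypothesis, E evidence, P(E) > 0):

  P(H|E) = P(E|H) P(H) / P(E)

The disagreement running through this thread is over whether updating degrees of belief by this rule amounts to a complete epistemology, not over the identity itself.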
0[anonymous]13y
If I understand correctly, I think curi is saying that there's no reason for probability and epistemology to be the same thing. That said, I don't entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these "epistemological problems" that Popper solves but Bayes doesn't?

There's an associated problem here that may be getting ignored: Popper isn't a terribly good writer. "The Logic of Scientific Discovery" was one of the first phil-sci books I ever read and it almost turned me off of phil-sci. This is in contrast for example with Lakatos or Kuhn who are very readable. Some of the difficulty with reading Popper and understanding his viewpoints is that he's just tough to read.

That said, I think that chapter 3 of that book makes clear that Popper's notion of falsification is more subtle than what I would call "...

0curi13y
I do not find Popper hard to read. Did you read his later books? He does explain his position. One distinguishing difference is that Popper is not a justificationist and they are. Tell me if you don't know what that means.

I gave a description of how a Bayesian sees the difference between "X supports Y" and "X is consistent with Y" in our previous discussion. I don't know if you saw it, you haven't responded to it and you aren't acting like you accepted it, so I'll give it again here:

"X is consistent with Y" is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e.

...
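The usual Bayesian way of completing this contrast (a standard textbook rendering, not the continuation of the truncated comment above) is:

  "X is consistent with Y":  P(X and Y) > 0
  "X supports Y":            P(Y|X) > P(Y), equivalently P(X|Y) > P(X|not-Y) when 0 < P(Y) < 1

so support means X strictly raises the probability of Y, while consistency only requires a non-zero joint probability.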
0curi13y
I missed your comment. I found it now. I will reply there. http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3uld?context=1#3uld No. The negation of a universal theory is not universal, and the negation of an explanatory theory is not explanatory. So, the interesting theories would still be criticism only, and the uninteresting ones (e.g. "there is a cat") support only. And the meaning of "support" is rather circumscribed there. If you want to say theories of the type "the following explanation isn't true: ...." get "supported" it doesn't contribute anything useful to epistemology. The support idea, as it is normally conceived, is still wrong, and this rescues none of the substance. The other issue is that criticism isn't the same kind of thing as support. It's not in the same category of concept. Yes I really reject the policeman's syllogism. In the sense of: I don't think the argument in the book is any good. There are other arguments which are OK for reaching the conclusion (but which rely on things the book left unstated, e.g. background knowledge and context. Without adding anything at all, no cultural biases or assumptions or hidden claims, and even doing our best to not use the biases and assumptions built into the English language, then no there isn't any way to guess what's more likely).
-1Peterdjones13y
If the Policeman's argument is only valid in the light of background assumptions, why would they need to be stated? Surely we would only need to make the same tacit assumptions to agree with the conclusions. Everyday reasoning differs from formal logic in various ways, and mainly because it takes short cuts. I don't think that invalidates it.

A huge strength of Bayesian epistemology is that it tells me how to program computers to form accurate beliefs. Has Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter?

-3curi13y
Bayesian epistemology didn't do that. Bayes' theorem did. See the difference?
5JGWeissman13y
Bayes' theorem is part of probability theory. Bayesian epistemology essentially says to take probability theory seriously as a normative description of degrees of belief. If you don't buy that and really want to split the hair, then I am willing to modify my question to: Has the math behind Popperian epistemology guided the development of any computer program as awesome as Gmail's spam filter? (Is there math behind Popperian epistemology?)
-4curi13y
gmail's spam filter does not have degrees of belief or belief. It has things which you could call by those words if you really wanted to. But that wouldn't make them into the same things those words mean when referring to people.
7JGWeissman13y
I want the program to find the correct belief, and then take good actions based on that correct belief. I don't care if lacks the conscious experience of believing. You are disputing definitions and ignoring my actual question. Your next reply should answer the question, or admit that you do not know of an answer.
5Alicorn13y
Augh, this reminded me of a quote that I can't seem to find based on my tentative memory of its component words... it was something to the effect that we anthropomorphize computers and talk about them "knowing" things or "communicating" with each other, and some people think that's wrong and they don't really do those things, and the quote-ee was of the opinion that computers were clarifying what we meant by those concepts all along. Anybody know what I'm talking about?
0curi13y
To be clear, I think computers can do those things and AIs will, and that will help clarify the concepts a lot. But I don't think that Microsoft Word does it. Nor any game "AI" today. Nor gmail's spam filter, which just does math mindlessly.

It has occurred to me before that the lack of a proper explanation on LessWrong of Bayesian epistemology (and not just saying "Here's Bayes' theorem and how it works, with a neat Java applet") is a serious lack. I've been reduced to linking the Stanford Encyclopedia of Philosophy article, which is really not well written at all.

It is also clear from the comments on this post that people are talking about it without citable sources, and are downvoting as a mark of disagreement rather than anything else. This is bad as it directly discourages thou...

0benelliott13y
I don't know if these are what you're looking for but: Probability Theory: The Logic of Science, by Jaynes, spends its first chapter explaining why we need a 'calculus of plausibility' and what such a calculus should hope to achieve. The rest of the book is mostly about setting it up and showing what it can do. (The link does not contain the whole book, only the first few chapters; you may need to buy or borrow it to get the rest). Yudkowsky's Technical Explanation, which assumes the reader is already familiar with the theorem, explains some of its implications for scientific thinking in general.
3David_Gerard13y
See here for what I see the absence of. There's a hole that needs filling here.
[anonymous]13y40

The naturalist philosopher Peter Godfrey-Smith said this of Popper's position:

[F]or Popper, it is never possible to confirm or establish a theory by showing its agreement with observations. Confirmation is a myth. The only thing an observational test can do is to show that a theory is false...Popper, like Hume, was an inductive skeptic, and Popper was skeptical about all forms of confirmation and support other than deductive logic itself...This position, that we can never be completely certain about factual issues, is often known as fallibilism...Accordi

...
4curi13y
No. To start with, it's extremely incomplete. It doesn't really discuss what Popper's position is. It just makes a few scattered statements which do not explain what Popper is about. The word "show" is ambiguous in the phrase "show that a theory is false". To a Popperian, equivocation over what is meant there is an important issue. It's ambiguous between "show definitively" and "show fallibly". The idea that we can show a theory is false by an experimental test (even fallibly) is also, strictly, false, as Popper explained in LScD. When you reach a contradiction, something in the whole system is false. It could be an idea you had about how to measure what you wanted to measure. There are many possibilities. It's right there in LScD on page 56. I think it's in most of his other books too. I am familiar with the field and know of no competent Popper scholars who say otherwise. Anyone publishing to the contrary is simply incompetent, or believed low quality secondary sources without fact checking them. You have misinterpreted when you took "falsify them" to mean "falsify them with certainty". Popper is a fallibilist. This does not even attempt to address important problems in epistemology such as how explanatory or philosophical knowledge is created.
3[anonymous]13y
I'll agree that Godfrey-Smith's definition is incomplete, but I don't think it really matters for the purpose of this discussion: I've already said I agree that Popper did not believe in certain confirmation, and this seems to be your main problem with this quote and with the ones other people gave. You wrote: No, that is not what I meant at all. What I meant was, Popper was content with the fact that experimental evidence can say that something is probably false. If he wasn't, he wouldn't have included this in his view of science as a process. So even though Popper was a fallibilist, he thought that when an experimental result argued against a hypothesis, it was good enough for science. Next: Yes, this is the old "underdetermination of theory by data" problem, which Solomonoff Induction solves--see the coinflipping example here. Moving on, you wrote: Would you mind elaborating on this? What specific problems are you referring to here?
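For readers unfamiliar with the reference: Solomonoff induction weighs hypotheses, viewed as programs p for a universal prefix machine U, by their length, so that the prior probability assigned to observed data x is (a sketch of the standard definition):

  M(x) = sum over programs p, with U(p) outputting a string that begins with x, of 2^(-|p|)

Programs that fit the data equally well are therefore ranked by length, which is exactly the ordering questioned in the reply below.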
5curi13y
That is not Popper's position. That is not even close. In various passages he explicitly denies it, with phrases like "not certain or probable". To Popper, the claims that the evidence tells us something is certainly true, or probably true, are cousins which share an underlying mistake. You're assuming Popper would agree with you about probability without reading any of his passages on probability in which he, well, doesn't. Arguing what books say with people who haven't read them gets old fast. So how about you just imagine a hypothetical person who had the views I attribute to Popper and discuss that? For example, the answers to all questions that have a "why" in them. E.g. why is the Earth roughly spherical? Statements with "because" (sometimes implied) are a pretty accurate way to find explanations, e.g. "because gravity is a symmetrical force in all directions". Another example is all of moral philosophy. Another example is epistemology itself, which is a philosophy not an empirical field. Yes. This does not solve the problem to my satisfaction. It orders theories which make identical predictions (about all our data, but not about the unknown) and then lets you differentiate by that order. But isn't that ordering arbitrary? It's just not true that short and simple theories are always best; sometimes the truth is complicated.
3jimrandomh13y
For a formal mathematical discussion of these sorts of problems, read Causality by Judea Pearl. He reduces cause to a combination of conditional independence and ordering, and from this he defines algorithms for discovering causal models from data, predicting the effect of interventions and computing counterfactuals.
2curi13y
Could you give a short statement of the main ideas? How can morality be reduced to math? Or could you say something to persuade me that that book will address the issues in a way I won't think misses the point? (E.g. by showing you understand what I think the point is; otherwise I won't expect you to be able to judge whether it misses the point in the way I would.)
2jimrandomh13y
Sorry, I over-quoted there; Pearl only discusses causality, and a little bit of epistemology, but he doesn't talk about moral philosophy at all. His book is all about causal models, which are directed graphs in which each vertex represents a variable and each edge represents a conditional dependence between variables. He shows that the properties of these graphs reproduce what we intuitively think of as "cause and effect", defines algorithms for building them from data and operating on them, and analyzes the circumstances under which causality can and can't be inferred from the data.
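As a rough illustration of the kind of model being described (a toy sketch, not Pearl's code; the variables and probabilities are invented), here is the standard rain/sprinkler example, with an intervention modelled by cutting the edge into the intervened variable:

```python
import random

# Toy three-variable causal model (invented probabilities):
#   Rain -> Sprinkler, Rain -> Wet, Sprinkler -> Wet.
# An intervention do(Sprinkler = on) is modelled by "graph surgery": we set
# Sprinkler directly and ignore its usual parent, Rain.

def sample(do_sprinkler=None):
    rain = random.random() < 0.3
    if do_sprinkler is None:
        # observational mechanism: rain makes the sprinkler less likely
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        # intervention: the Rain -> Sprinkler edge is cut
        sprinkler = do_sprinkler
    wet = rain or (sprinkler and random.random() < 0.9)
    return rain, sprinkler, wet

# Observing that the sprinkler is on makes rain less likely (the variables
# are dependent), but *setting* the sprinkler tells us nothing about rain.
observed = [r for r, s, w in (sample() for _ in range(100_000)) if s]
intervened = [r for r, s, w in (sample(do_sprinkler=True) for _ in range(100_000))]
print(sum(observed) / len(observed))      # roughly 0.07, well below the prior 0.3
print(sum(intervened) / len(intervened))  # roughly 0.3, the prior
```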
3curi13y
I don't understand the relevance.
3jimrandomh13y
Your quote seemed to be saying that Bayesianism couldn't handle why/because questions, but Popperian philosophy could. I mentioned Pearl as a treatment of that class of question from a Bayes-compatible perspective.
2curi13y
Causality isn't explanation. "X caused Y" isn't the issue I was talking about. For example, the statement "Murder is bad because it is illiberal" is an explanation of why it is bad. It is not a statement about causality. You may say that "illiberal" is a shortcut for various other ideas, and you may claim that those eventually reduce away to causal issues. But that would be reductionism. We do not accept that high-level concepts are a mistake or that emergence isn't important.
-1JoshuaZ13y
Huh? It may be that I haven't read Logic of Scientific Discovery in a long time, but as far as I remember/can tell, Popper doesn't care about moral whys like "why is murder bad" at all. That seems to be an issue generally independent of both Bayesian and Popperian epistemology. One could be a Bayesian and be a utilitarian, or a virtue ethicist, or some form of deontologist. What am I missing?
4curi13y
He doesn't discuss them in LScD (as far as I remember). He does elsewhere, e.g. in The World of Parmenides. Whether he published moral arguments or not, his epistemology applies to them and works with them -- it is general purpose. Epistemology is about how we get knowledge. Any epistemology which can't deal with entire categories of knowledge has a big problem. It would mean a second epistemology would be needed for that other category of knowledge. And that would raise questions like: if this second one works where the first failed, why not use it for everything? Popper's method does not rely on only empirical criticism but also allows for all types of philosophical criticism. So it's not restricted to only empirical issues.
1ShardPhoenix13y
You seem to be assuming that "morality" is a fact about the universe. Most people here think it's a fact about human minds (i.e. we aren't moral realists, at least not in the sense that a religious person is).
-4curi13y
Yes, morality is objective. I don't want to argue terminology. There are objective facts about how to live; call them what you will. Or maybe you'll say there aren't. But if there aren't, then it's not objectively wrong to be a mass murderer. Do you really want to go there, into full-blown relativism and subjectivism?
6ShardPhoenix13y
Well, that's just like, your opinion, man. Seriously: Morality is in the brain. Murder is "wrong" because I, and people sufficiently similar to me, don't like it. There's nothing more objective about it than any of my other opinions and desires. If you can't even agree on this, then coming here and arguing is hopeless - you might as well be a Christian and try to tell us to believe in God.
0zaph13y
Well stated. And I would further add that there are issues where significant minority interests staunchly disagree with majority opinion. Take the debates on homosexual marriage or abortion. The various sides have such different viewpoints that there isn't a common ground where any agreeably objective position can be reached. The "we all agree mass murder is wrong" argument is a cop-out, because it implies all moral questions are that black and white. And even then, if it's such a universal moral, why does it happen in the first place? In the brain-based morality model, I can say Dennis Rader's just a substantially different brain. With universal morality, you're stuck with the problem of people knowing something is wrong, but doing it anyway.
-2[anonymous]13y
Actually, one of the reasons I stood by this interpretation of Popper was one of the quotes posted in one of the other threads here, which is apparently from Conjectures and Refutations, pg. 309. Regardless, I don't care about this argument overmuch, since we seem to have moved on to some other points. Remember that in Bayesian epistemology, probabilities represent our state of knowledge, so, as you pointed out, the simplest hypothesis that fits the data so far may not be the true one because we haven't seen all of the data. But it is necessarily our best guess because of the conjunction rule.
2curi13y
There are so many problems here that it's hard to choose a starting point.

1) The data set you are using is biased (it is selective; all observation is selective).

2) There is no such thing as "raw data" -- all your observations are interpreted, and your interpretations may be mistaken.

3) What do you mean by "best guess"? One meaning is "most likely to be the final, perfect truth", but a different meaning is "most useful now".

4) You say "probabilities represent our state of knowledge". However, there are infinitely many theories with the same probability. Or there would be, except for your Solomonoff prior giving simpler theories higher probability. So the important part of the "state of our knowledge" represented by these probabilities consists mostly of the Solomonoff prior and nothing else, because it, and it alone, is dealing with the hard problem of epistemology (dealing with theories which make identical predictions about everything we have data for).

5) You can have infinite data and still get all non-empirical issues wrong.

6) Regarding the conjunction rule, there is miscommunication; this does not address the point I was trying to make. I think you have a premise like "all more complicated theories are merely conjunctions of simpler theories". But that is to conceive of theories very differently than Popperians do, in what we see as a limited and narrow way.

To begin to address these issues, let's consider what's better: a bald assertion, or an assertion plus an explanation of why it is correct? If you want "most likely to happen to be the perfect, final truth", you are better off with only an unargued assertion (since any argument may be mistaken). But if you want to learn about the world, you are better off not relying on unargued assertions.
0JoshuaZ13y
You are going to have to expand on this. I don't see how the conjunction rule implies that simpler hypotheses are in general more probable. This is true if we have two hypotheses where one is X and the other is "X and Y" but that's not how people generally apply this sort of thing. For example, I might have a sequence of numbers that for the first 10,000 terms has the nth term as the nth prime number. One hypothesis is that the nth term is always the nth prime number. But I could have as another hypothesis some high degree polynomial that matches the first 10,000 primes. That's clearly more complicated. But one can't use conjunction to argue that it is less likely.
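A small-scale sketch of the polynomial point (using the first six primes instead of 10,000, and assuming numpy is available): the unique low-degree polynomial through the data matches everything seen so far but fails on the very next term.

```python
import numpy as np

# Through any n points there is a unique polynomial of degree <= n-1, so a
# "complicated" hypothesis can always match the data seen so far while
# diverging on the next, unseen term.
primes = [2, 3, 5, 7, 11, 13]                 # terms 1..6 of the sequence
xs = np.arange(1, len(primes) + 1)

coeffs = np.polyfit(xs, primes, deg=len(primes) - 1)  # exact degree-5 fit
print(np.polyval(coeffs, xs))   # reproduces 2, 3, 5, 7, 11, 13 (up to rounding)
print(np.polyval(coeffs, 7))    # about -6, not the 7th prime, 17
```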
0[anonymous]13y
Imagine that I have some set of propositions, A through Z, and I don't know the probabilities of any of these. Now let's say I'm using these propositions to explain some experimental result--since I would have uniform priors for A through Z, it follows that an explanation like "M did it" is more probable than "A and B did it," which in turn is more probable than "G and P and H did it."
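For concreteness, a minimal sketch of the arithmetic being appealed to here (it also assumes the propositions are independent, which the comment leaves implicit; the prior value is arbitrary):

```python
# With independent propositions sharing a common prior p < 1, a conjunction
# is never more probable than any single conjunct.
p = 0.2            # hypothetical uniform prior for each of A..Z

p_single = p        # P("M did it")
p_pair   = p ** 2   # P("A and B did it"), assuming independence
p_triple = p ** 3   # P("G and P and H did it"), assuming independence

assert p_single > p_pair > p_triple
print(p_single, p_pair, p_triple)   # 0.2 0.04 0.008 (approximately)
```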
0JoshuaZ13y
Yes, I agree with you there. But this is much weaker than any general form of Occam. See my example with primes. What we want to say in some form of Occam approach is much stronger than what you can get from simply using the conjunction argument.
0falenas10813y
Sorry, didn't see you posted this before I replied too...
0[anonymous]13y
Actually, I'm glad you replied as well--the more quotes about/by Popper that we unearth, the more accurate we will be.

The thing intended as the proof is most of chapter 2. I dislike Jaynes' assumptions there, since I find many of them superfluous compared to other proofs. You probably like them even less, since one is "Representation of degrees of plausibility by real numbers".

3curi13y
It cannot be a proof of Bayesian epistemology itself if it makes assumptions like that. It is merely a proof of some theorems in Bayesian epistemology given some premises that Bayesians like. If you have a different proof which does not make assumptions I disagree with, then let's hear it. Otherwise you can give up on proving and start arguing why I should agree with your starting points. Or maybe even, say, engaging with Popper's arguments and pointing out mistakes in them (if you can find any).
0Peterdjones13y
You are complaining that it is not a deduction of Bayes from no assumptions whatever. But all it needs is for those assumptions to be made to "work"--i.e. applied without contradiction, quodlibet, or other disaster.
-2Peterdjones13y
Remember, Popper himself said it all starts with common sense.
-2endoself13y
I agree that it is by no means a complete proof of Bayesian epistemology. The book I pointed you to might have a more complete one, though I doubt it will be complete since it seems more like a book about using statistics than about rigorously understanding epistemology. I am currently collecting the necessary knowledge to write the full proof myself, if it is possible (not because of this debate, but because I kept being annoyed by unjustified assumptions that didn't even seem necessary).
-1curi13y
Good luck. But, umm, do you have some argument against fallibilism? Because you're going to need one.
0endoself13y
I think I massively overstated my intention. I meant the full proof of the stuff we know; the thing I think could be in Mathematical Statistics, Volume 1: Basic and Selected Topics. Anyways, I think I accept fallibilism, at least from the Wikipedia page. Why do you think I don't? This is understandable, because I've been talking about idealized agents a lot more than about humans actually applying Bayesianism.
0curi13y
I think you are not a fallibilist because you want to prove philosophical ideas. But we can't have certainty. So what do you even think it means to "prove" them? Why do you want to prove them instead of give good arguments on the matter?
0endoself13y
I use the word "prove" because I'm doing it deductively, in math. I already linked you to the 2+2=3 thing, I believe. Also, the question of how I would, for example, change AI design if a well-known theorem is wrong (pretend it is the future, the best theorems proving Bayesianism are better known, and I am working on AI design) is both extremely hard to answer and unlikely to be necessary. Well, "unlikely" is the wrong word; what is P(X | "There are no probabilities")? :)
1calef13y
Probably the most damning criticism you'll find, curi, is that fallibilism isn't useful to the Bayesian. The fundamental disagreement here is somewhere in the following statement: "There exist true things, and we have a means of determining how likely it is for any given statement to be true. Furthermore, a statement that has a high likelihood of being true should be believed over a similar statement with a lower likelihood of being true."

I suspect your disagreement is in one of several places: 1) you disagree that there even exist epistemically "true" facts; 2) that we can determine how likely something is to be true; or 3) that likelihood of being true (as defined by us) is a reason to believe the truth of something.

I can actually flesh out your objections to all of these things. For 1, you could probably successfully argue that we aren't capable of determining if we've ever actually arrived at a true epistemic statement because real certainty doesn't exist, thus the existence or nonexistence of true epistemic statements is on the same epistemological footing as the existence of God--i.e. shaky to the point of not concerning oneself with them altogether. 2 basically ties in with the above directly. 3 is a whole 'nother ball game, and I don't think it's really been broached yet by anyone, but it's certainly a valid point of contention. I'll leave it out unless you'd like to pursue it.

The Bayesian counter to all of these is simply, "That doesn't really do anything for me." Declaring we have certainty, and quantifying it as best we can, is incredibly useful. I can pick up an apple and let go. It will fall to the ground. I have an incredibly huge amount of certainty in my ability to repeat that experiment. That I cannot foresee the philosophical paradigm that will uproot my hypothesis that dropped apples fall to the ground is not a very good reason to reject my relative certainty in the soundness of my hypothesis. Such an apples-aren't-falling-when-dropped ...
2endoself13y
Small nitpick: I don't like your use of the word 'certainty' here. Especially in philosophy, it has too much of a connotation of "literally impossible for me to be wrong" rather than "so ridiculously unlikely that I'm wrong that we can just ignore it", which may cause confusion.
0calef13y
Where don't you like it? I don't think anyone actually argues for your first definition, because, like I said, it's silly. I think curi's point is that fallibilism is predicated on your second definition not (ever?) being a valid claim. My point is that the things we are "certain" about (as per your second definition) probably coincide almost exactly with "statements without criticism" as per curi's definition(s).
4endoself13y
It is a silly definition, but people are silly enough that I hear it often enough to be wary of it. I interpreted this as the first definition. I guess we should see what curi says.
-1Peterdjones13y
People generally try to have their cake and eat it: they want certainty to mean "cannot be wrong", but only on the basis that they feel sure.

Curi,

"Some first chapter assumptions are incorrect or unargued. It begins with an example with a policeman, and says his conclusion is not a logical deduction because the evidence is logically consistent with his conclusion being false."

Popper's epistemology doesn't explain that the conclusion of the argument has no validity, in the sense of being certainly false. In fact, it requires that the conclusion is not certainly false. No conjecture is certainly false.

Perhaps you meant he shows that the argument is invalid in the sense of being a non sequ...

3Peterdjones13y
"Science, philosophy and rational thought must all start from common sense". KRP, Objective Knowledge, p33. Starting with common sense is exactly what Jaynes is doing. (Popper says that what is important is not to take common sense as irrefutable).

If anyone can bear more of this, Popper's argument against induction using Bayes is being discussed here.

"'What is support?' (This is not asking for its essential nature or a perfect definition, just to explain clearly and precisely what the support idea actually says) and 'What is the difference between "X supports Y" and "X is consistent with Y"?' If anyone has the answer, please tell me."

Bayesians appear to have answers to these questions. Moreover, far from wishing to refute Popper, they can actually incorporate a form of Popperianism.

"On the other hand, Popper's idea that there is only falsification and no such thing a... (read more)

From the research I have done in the last 5 minutes, it seems as though Popper believed that all good scientific theories should be subject to experiments that could prove them wrong.
Ex:

"the falsificationists or fallibilists say, roughly speaking, that what cannot (at present) in principle be overthrown by criticism is (at present) unworthy of being seriously considered; while what can in principle be so overthrown and yet resists all our critical efforts to do so may quite possibly be false, but is at any rate not unworthy of being seriously considered and perhaps even of being believed" -Popper

This seems to imply that theories can be proved false.
