curi comments on Bayesian Epistemology vs Popper - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
FYI that won't work. Wikipedia doesn't understand Popper. Secondary sources promoting myths, like Jaynes did, is common. A pretty good overview is the Popper book by Bryan Magee (only like 100 pages).
I posted criticisms of Jaynes' arguments (or more accurately, his assumptions). I posted an argument about support. Why don't you answer it?
You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!
Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.
If you want somewhere to start online, you could read
http://fallibleideas.com/
That is not primarily what we want. And what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).
Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:
Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn't have, such as not having arbitrary foundations.
What you're doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that's bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes's theorem at all).
Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)
Why don't you fix the WP article?
Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don't have to spend time thinking about it, because it is solved. I would not generally criticize a rival's ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.
Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn't object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).
The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.
Really this entire discussion comes down to what we want out of epistemology.
What do you want? I don't understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:
I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.
Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?
You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper's ideas aren't nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?
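The point above about priors can be illustrated concretely. The following sketch (with made-up numbers) shows that for each prior there is a "universe" in which that prior makes the best bets: a prior concentrated on the true hypothesis earns a higher expected log score there, and a lower one elsewhere, so no single prior dominates in all universes.

```python
import math

def predictive(w):
    """Prior-weighted probability of heads: weight w on H1 (heads-rate 0.9),
    weight 1-w on H2 (heads-rate 0.1). Numbers are invented for illustration."""
    return w * 0.9 + (1 - w) * 0.1

def expected_log_score(pred, true_p):
    """Expected log score of predicting heads with probability `pred`
    when the true heads-rate is `true_p`."""
    return true_p * math.log(pred) + (1 - true_p) * math.log(1 - pred)

for true_p, label in [(0.9, "universe where H1 is true"),
                      (0.1, "universe where H2 is true")]:
    s_h1 = expected_log_score(predictive(0.9), true_p)  # prior favoring H1
    s_h2 = expected_log_score(predictive(0.1), true_p)  # prior favoring H2
    print(label, "-> prior favoring H1 scores",
          "better" if s_h1 > s_h2 else "worse", "than prior favoring H2")
```

Each prior wins in the universe that matches it, which is the "mathematical fact" the comment appeals to.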
If I have to bet, or make a decision that affects people's lives which amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.
You don't think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).
But you haven't provided any argument that Popper in particular was confused, irrefutable, or whatever. I don't know about you, but as someone who wants to improve my epistemological knowledge I think it's important to consider all the major ideas in the field at the very least enough to know one good criticism of each.
Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you're done with thinking, you have the final truth, and that's that..?
Popper published several of those. Where's the response from Bayesians?
One thing to note is it's hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)
Popper solved that problem.
The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this.
One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn't exist automatically but is being created by your own assumptions.
What are you talking about? You haven't read his books and claim he didn't give enough detail? He was something of a workaholic who didn't watch TV, didn't have a big social life, and worked and wrote all the time.
To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I'm not assuming you actually are a positivist, but I'm not really seeing much difference yet.)
To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.
Neither. You can and should do better!
Given well defined contexts and meanings for good and bad I don't see why Bayesianism could not be effectively applied to moral problems.
Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.
You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).
You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests you've shown no details of that claim other than to say that you would do the same thing but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like say slavery or free speech) and show how the Popperian would approach it? A concrete worked-out example would be very helpful.
http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7
And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.
Consider that you are replying to a statement I just said that all you've done is say that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?
First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.
Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.
This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.
These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.
Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.
Sorry. I have no idea who is who. Don't mind me.
The Popperian method is universal.
Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us.
Example or details?
No problem, I'm just pointing out that there are other perspectives out here.
Sure, in the sense it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example I'm not going to use it to decide the answer to the statement "2 + 3", it is much more efficient for me to use the arithmetic abstraction.
Agreed, it is one of the reasons that I am actively working on epistemology.
The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input.
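A toy sketch of the naive Bayes idea mentioned above, classifying short texts as "good" or "bad". The training sentences and labels are invented purely to show the mechanics (word counts per class, add-one smoothing, pick the class with the highest log-probability):

```python
import math
from collections import Counter, defaultdict

# Invented toy training data: (text, label).
train = [
    ("helping others is kind", "good"),
    ("honesty and kindness", "good"),
    ("sharing is generous and kind", "good"),
    ("stealing is cruel", "bad"),
    ("lying is cruel and selfish", "bad"),
    ("cruel selfish betrayal", "bad"),
]

# Count word frequencies per class and overall class frequencies.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def classify(text):
    """Pick the class maximizing log P(class) + sum of log P(word|class),
    with add-one (Laplace) smoothing so unseen words don't zero out a class."""
    best, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("kind and generous"))  # -> good
print(classify("selfish and cruel"))  # -> bad
```

This is only the discrete "independent classes" case the comment describes; real moral judgments obviously don't reduce to word counts.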
Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.
Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
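The dependency structure described above can be made concrete with a minimal Bayesian network: two binary parent variables (religious belief, inheritance tradition) and one child (the belief that the killing is justified), with inference by enumeration over the joint. All numbers in the conditional probability table are invented for illustration, not data from any culture:

```python
# Parent priors (made-up numbers purely to illustrate the mechanics).
p_r = 0.6   # P(religious_belief)
p_i = 0.5   # P(inheritance_tradition)

# CPT: P(belief_justified | religious, inheritance) -- invented values.
p_k = {(1, 1): 0.7, (1, 0): 0.2, (0, 1): 0.1, (0, 0): 0.01}

def joint(r, i, k):
    """Joint probability of one full assignment of the three variables."""
    pr = p_r if r else 1 - p_r
    pi = p_i if i else 1 - p_i
    pk = p_k[(r, i)] if k else 1 - p_k[(r, i)]
    return pr * pi * pk

# Inference by enumeration: marginal P(K=1), and P(K=1 | R=1).
p_k1 = sum(joint(r, i, 1) for r in (0, 1) for i in (0, 1))
p_k1_given_r1 = sum(joint(1, i, 1) for i in (0, 1)) / sum(
    joint(1, i, k) for i in (0, 1) for k in (0, 1))
print(round(p_k1, 3), round(p_k1_given_r1, 3))  # -> 0.292 0.45
```

Conditioning on the cultural variables shifts the probability of the moral belief, which is the kind of "why is X moral here but not there" relationship the comment has in mind; a network trained across cultures would just estimate these tables from data.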
No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which I think is all of them, but that doesn't matter to universality).
Not in the sense that it's Turing complete so you could, by a roundabout way and using whatever methods, do anything.
I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:
You have very limited ambitions. You're trying to focus on small questions b/c you think bigger ones, like "what is moral, objectively?", are too hard and, since your math won't answer them, hopeless.
Perhaps I don't understand some nuance of what you mean here. If you can explain it or link to something that explains this in detail I will read it.
But to respond to what I think you mean... If you have a method that can be applied to all types of knowledge, that implies that it is Turing complete; it is therefore equivalent in capability to other Turing complete systems; that also means it is susceptible to the infinite regresses you dislike in "justificationist epistemologies"... i.e. the halting problem.
Also, just because it can be applied to all types of knowledge does not mean it is the best choice for all types of knowledge, or for all types of operations on that knowledge.
I would not describe my perspective that way; you may have forgotten that I am a third party in this argument. I think that there is a lot of historical junk in philosophy and that it is continuing to produce a lot of junk -- Popper didn't fix this and neither will Bayesianism, it is more of a people problem -- but philosophy has also produced and is producing a lot of interesting and good ideas.
I think one way we differ is that you see a distinct difference between math and philosophy and I see a wide gradient of abstractions for manipulating information. Another is that you think that there is something special about Popper's approach that allows it to rise above all other approaches in all cases, and I think that there are many approaches and that it is best to choose the approach based on the context.
This was a response to your request for an example; you read too much into it to assume it implies anything about my ambitions.
A question like "what is moral objectively?" is easy. Nothing is "moral objectively". Meaning is created within contexts of assessment; if you want to know if something is "moral" you must consider that question with a context that will perform the classification. Not all contexts will produce the same result and not all contexts will even support a meaning for the concept of "moral".
Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone's particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.
In particular, if you ever try to make a claim of the form "You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief" then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.
Edit: changed wording to be less of an ass.
In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?
Given that I do make claims on my website, I wonder why you don't pick one and point out something you think is wrong with it.
Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)
I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.
If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments--it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website--presumably it was not designed first and foremost as a response to Bayesianism--but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.
To be clear, what I am looking for is a statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage." Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been "wrong" where Popper would be clearly "right" at any historical point would be good enough to argue about.
Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks
While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.
I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.
Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.
It doesn't take Popperian epistemology to learn social fluency. I've learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.
If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.
That's a claim that only makes sense in certain epistemological systems...
I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.
Hmm? I'm not sure who you mean by we? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem ok with this sort of thing. That's not to say I understand how they think it is implied/ok in a Popperian framework).
No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.
Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can't listen to an mp3 right now.
My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren't the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn't? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?
I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don't yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian's willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.
e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
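The conjecture-and-criticism loop described above can be caricatured in code. This is only a schematic sketch (the conjectures and criticisms are toy strings and predicates I invented), but it shows the shape of the procedure: generate conjectures, apply criticisms, and note how one general-purpose criticism can refute a whole category at once:

```python
# Schematic sketch of the conjecture-and-criticism loop. All names and
# contents are invented for illustration; real criticism is open-ended.

conjectures = [
    "yes",
    "no",
    "yes, because plants are safe, clean, and efficient",
    "no, because of a conspiracy of regulators",
]

# A criticism is a function returning an explanation of a mistake, or None.
def too_vague(c):
    # Bare answers with no explanatory detail get rejected.
    return "no explanatory detail given" if c in ("yes", "no") else None

def conspiracy_theory(c):
    # One general-purpose criticism refutes an entire category of conjectures.
    return "conspiracy theories are unexplanatory" if "conspiracy" in c else None

criticisms = [too_vague, conspiracy_theory]

surviving = [c for c in conjectures
             if not any(crit(c) for crit in criticisms)]
print(surviving)  # only the detailed "yes, because..." variant survives
```

In the actual epistemology the loop iterates: refuted conjectures spawn improved variants and new criticisms, rather than terminating after one pass as here.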
I think it's wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes' theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don't want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories is problematic (b/c e.g. there is no way to make them non-arbitrary). And it's not something a fallibilist can bet on because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If i convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
But some are criticized and some aren't.
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can't reasonably guarantee that I will not have later objections as well before we've even had the discussion!
So let me see if I'm understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we -- I don't want to say "assume that it's true," because that's probably not correct -- we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I'm not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, "in order to make a decision, we need to have a guiding theory which is currently impervious to criticism" (my current understanding of Popper's idea, roughly illustrated), isn't this just another way of saying: "the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?"
In short, isn't imperviousness to criticism a type of justification in itself?
You're equivocating between "knowing exactly the contents of the new knowledge", which may be impossible for the reason you describe, and "know some things about the effect of the new knowledge", which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.
That's because to a Bayesian, these things are the same thing. Epistemology is all about probability - and vice versa. Bayes's theorem includes induction and confirmation. You can't accept Bayes's theorem and reject induction without crazy inconsistency - and Bayes's theorem is just the math of probability theory.
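The "confirmation" the comment refers to is just the arithmetic of Bayes' theorem: evidence more likely under H than under ~H raises P(H). A minimal worked example, with invented numbers:

```python
# Minimal Bayes-theorem update -- the sense in which evidence "positively
# supports" a hypothesis. All probabilities are invented for illustration.
prior_h = 0.5           # P(H)
p_e_given_h = 0.8       # P(E|H)
p_e_given_not_h = 0.2   # P(E|~H)

# Total probability of the evidence, then the posterior via Bayes' theorem.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
print(posterior_h)  # -> 0.8: observing E raised P(H) from 0.5
```

Whether this arithmetic (which both sides agree is correct) amounts to a solution of the epistemological problem of induction is exactly what the thread is disputing.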
If I understand correctly, I think curi is saying that there's no reason for probability and epistemology to be the same thing. That said, I don't entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these "epistemological problems" that Popper solves but Bayes doesn't?