
paulfchristiano comments on Bayesian Epistemology vs Popper - Less Wrong Discussion

-1 Post author: curi 06 April 2011 11:50PM


Comment author: paulfchristiano 07 April 2011 12:19:41AM *  5 points

I don't understand Popper's work beyond the Wikipedia summary of critical rationalism. That summary, as well as the debate here at LW, appears to be confused and essentially without value. If this is not the case, you should update this post to include not just a description of how supporters of Bayesianism don't understand Popper, but why they should care about this discussion--why Bayesianism is not, as it seems, obviously the correct answer to the question Popper is trying to answer.

If you want to make bets about the future, Bayesianism will beat whatever else you could use. To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts, if you define Bayesianism and "winning" precisely.

It seems like the only possible room for debate is the choice of prior. Everyone is forced to either implicitly choose a prior or else bet in a way that is manifestly irrational. This is also a mathematical fact. The Solomonoff prior provably isn't too bad. You just have to get over the arbitrariness.

Edit: Let's make this more precise. I claim that if we play a betting game, I can reconstruct a prior from your strategy such that a Bayesian using that prior will beat you in expectation. Do you object to this mathematical statement, or do you object to the interpretation of this fact as "Bayesianism is correct"? I'm not sure which side of the fence you are on, but I suppose it must be one or the other, so if we get that sorted out maybe we can make progress.
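The betting claim can be illustrated with a toy simulation. This is only my own sketch, not the reconstruction argued for above: the two candidate coin biases, the uniform two-point prior, and the 50/50 rival strategy are all invented for illustration. It compares the cumulative log score of a Bayesian who updates a prior over two coin-bias hypotheses against a bettor who always quotes even odds:

```python
import math
import random

def bayes_vs_fixed(true_bias=0.8, n_flips=200, seed=0):
    """Compare log scores: a two-hypothesis Bayesian vs. a fixed 50/50 bettor."""
    rng = random.Random(seed)
    hypotheses = [0.2, 0.8]   # candidate coin biases (made up for illustration)
    posterior = [0.5, 0.5]    # uniform prior over the two hypotheses
    bayes_score = fixed_score = 0.0
    for _ in range(n_flips):
        heads = rng.random() < true_bias
        # Bayesian predictive probability of heads under the current posterior
        p_heads = sum(w * h for w, h in zip(posterior, hypotheses))
        bayes_score += math.log(p_heads if heads else 1 - p_heads)
        fixed_score += math.log(0.5)
        # posterior update on the observed flip
        likes = [h if heads else 1 - h for h in hypotheses]
        z = sum(w * l for w, l in zip(posterior, likes))
        posterior = [w * l / z for w, l in zip(posterior, likes)]
    return bayes_score, fixed_score
```

Once the data favor one hypothesis, the updater's cumulative log score pulls ahead of the fixed strategy's; the full "reconstruct a prior from any strategy" argument is stronger than this, and only gestured at here.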

Comment author: curi 07 April 2011 12:30:32AM 1 point

I don't understand Popper's work beyond the Wikipedia summary of critical rationalism

FYI that won't work. Wikipedia doesn't understand Popper. It's common for secondary sources to promote myths, as Jaynes did. A pretty good overview is the Popper book by Bryan Magee (only about 100 pages).

without value

I posted criticisms of Jaynes' arguments (or more accurately, his assumptions). I posted an argument about support. Why don't you answer it?

You just have to get over the arbitrariness.

You are basically admitting that your epistemology is wrong. Given that Popper has an epistemology which does not have this feature, and the rejections of him by Bayesians are unscholarly mistakes, you should be interested in it!

Of course if I wrote up his whole epistemology and posted it here for you that would be nice. But that would take a long time, and it would repeat content from his books.

If you want somewhere to start online, you could read

http://fallibleideas.com/

If you want to make bets about the future

That is not primarily what we want. And what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).

To suggest that something else is an improved method of doing science is nothing more than to suggest that it is a more feasible approximation to Bayesianism. These things are mathematical facts,

Are you open to the possibility that the general outline of your approach is itself mistaken, and that the theorems you have proven within your framework of assumptions are therefore not all true? Or:

It seems like the only possible room for debate is the choice of prior.

Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say? Even when they offer things your approach doesn't have, such as not having arbitrary foundations.

What you're doing is accepting ideas which have been popular since Aristotle. When you think no other ways are possible, that's bias talking. Your ideas have become common sense (not the Bayes part, but the philosophical approach to epistemology you are taking which comes before you use Bayes's theorem at all).

Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right? And if no one ever has, then you should be interested in investigating, right? And also interested in investigating what is wrong with your movement that it never addressed rival ideas in scholarly debate. (I have looked for such a criticism. Never managed to find one.)

Comment author: Peterdjones 15 April 2011 03:14:18PM 3 points

Why don't you fix the WP article?

Comment author: paulfchristiano 07 April 2011 01:02:58AM 9 points

Here let me ask you a question: has any Bayesian ever published any substantive criticism of an important idea in Popper's epistemology? Someone should have done it, right?

Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably. In the case of epistemology, virtually everything that has ever been said falls into this category. I am glad that I don't have to spend time thinking about it, because it is solved. I would not generally criticize a rival's ideas, because I no longer care. The problem is solved, and I can go work on things that still matter.

Are you so sure of yourself -- that you are right about many things -- that you will dismiss all rival ideas without even having to know what they say?

Once I know the definitive answer to a question, I will dismiss all other answers (rather than trying to poke holes in them). The only sort of argument which warrants response is an objection to my current definitive answer. So ignorance of Popper is essentially irrelevant (and I suspect I couldn't object to anything in his philosophy, because it has essentially no content concrete enough to be defeated by mere reasoning).

The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested, except that I am fairly confident the problem has no solution for what are essentially obvious reasons. Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.

Really this entire discussion comes down to what we want out of epistemology.

That [guiding betting] is not primarily what we want.

What do you want? I don't understand at all. Whatever you specify, I would be shocked if critical rationality provided it. Here is what I want, and maybe you will agree:

I want to decide between action A and action B. To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world. In particular, by choosing B instead of A, I am making a bet about the consequences of A and B. I would like to make such bets in the best possible way.

Lo! This is precisely what Bayesianism allows me to do. Why is there more to say?

You can object that it involves knowing a prior. But from the problem statement it is obvious (as a mathematical fact) that there is a universe in which each possible prior is the best one. Is there a strategy that does better than Bayesianism with a reasonable prior in all possible universes? Maybe, but Popper's ideas aren't nearly precise enough to answer the question (by which I mean, not even at the point where this question, to me clearly the most important one, is meaningful). Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?

If I have to bet, or make a decision that affects people's lives and amounts to a bet, I am going to use Bayesianism, or a computational heuristic which I justify by Bayesianism. Doing something else seems irresponsible.

Comment author: curi 07 April 2011 02:07:06AM 1 point

Most things in the space of possible documents can't be refuted, because they don't correspond to anything refutable. They are simply confused, and irredeemably.

You don't think confused things can be criticized? You can, for example, point out ambiguous passages. That would be a criticism. If they have no clarification to offer, then it would be (tentatively and fallibly) decisive (pending some reason to reconsider).

But you haven't provided any argument that Popper in particular was confused, irrefutable, or whatever. I don't know about you, but as someone who wants to improve my epistemological knowledge I think it's important to consider all the major ideas in the field at the very least enough to know one good criticism of each.

Refusing to address criticism because you think you already have the solution is very closed minded, is it not? You think you're done with thinking, you have the final truth, and that's that..?

The only sort of argument which warrants response is an objection to my current definitive answer.

Popper published several of those. Where's the response from Bayesians?

One thing to note is it's hard to understand his objections without understanding his philosophy a bit more broadly (or you will misread stuff, not knowing the broader context of what he is trying to say, what assumptions he does not share with you, etc...)

The real question, in fact the only question, is whether the arbitrariness of choosing a prior can be surmounted--whether my current answer is not actually definitive. If someone came to me and said they had a solution to this problem I would be interested

Popper solved that problem.

I am fairly confident the problem has no solution for what are essentially obvious reasons

The standard reasons seem obvious because of your cultural bias. Since Aristotle some philosophical assumptions have been taken for granted by almost everyone. Now most people regard them as obvious. Given those assumptions, I agree that your conclusion follows (no way to avoid arbitrariness). The assumptions are called "justificationism" by Popperians, and are criticized in detail. I think you ought to be interested in this.

One criticism of justificationism is that it causes the regress/arbitrariness/foundations problem. The problem doesn't exist automatically but is being created by your own assumptions.

Popper avoids this problem by not even describing his epistemology precisely enough to express the difficulty.

What are you talking about? You haven't read his books and claim he didn't give enough detail? He was something of a workaholic who didn't watch TV, didn't have a big social life, and worked and wrote all the time.

What do you want?

To create knowledge, including explanatory and non-instrumentalist knowledge. You come off like a borderline positivist to me, who has trouble with the notion that non-empirical stuff is even meaningful. (No offense intended, and I'm not assuming you actually are a positivist, but I'm not really seeing much difference yet.)

To do this, I want to evaluate the consequences of action A and action B. To do this, I want to predict something about the world.

To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.

Should I use a theory which I understand and which has an apparently necessary flaw, or a theory which is underspecified and therefore "avoids" this difficulty?

Neither. You can and should do better!

Comment author: David_Allen 07 April 2011 04:16:38PM 0 points

To take one issue, besides predicting the physical results of your actions you also need a way to judge which results are good or bad. That is moral knowledge. I don't think Bayesianism addresses this well.

Given well-defined contexts and meanings for good and bad, I don't see why Bayesianism could not be effectively applied to moral problems.

Comment author: curi 07 April 2011 06:40:28PM 0 points

Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

Comment author: JoshuaZ 07 April 2011 06:58:15PM *  2 points

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

You've repeatedly claimed that the Popperian approach can somehow address moral issues. Despite requests, you've shown no details of that claim other than to say that you would do the same thing you do elsewhere, but with moral claims. So let's work through a specific moral issue. Can you take an example of a real moral issue that has been controversial historically (like, say, slavery or free speech) and show how a Popperian would approach it? A concrete worked-out example would be very helpful.

Comment author: curi 07 April 2011 07:00:42PM *  -2 points

http://lesswrong.com/lw/552/reply_to_benelliott_about_popper_issues/3uv7

And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.

Comment author: JoshuaZ 07 April 2011 07:10:36PM 0 points

And it creates moral knowledge by conjecture and refutation, same as any other knowledge. If you understand how Popper approaches any kind of knowledge (which I have written about a bunch here), then you know how he approaches moral knowledge too.

Consider that you are replying to a statement in which I just said that all you've done is claim that it would use the same methodologies. Given that, does this reply seem sufficient? Do I need to repeat my request for a worked example (which is not included in your link)?

Comment author: David_Allen 07 April 2011 08:26:10PM 0 points

Yes, given moral assertions you can then analyze them. Well, sort of. You guys rely on empirical evidence. Most moral arguments don't.

First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.

Bayes' theorem is an abstraction. If you don't have a reasonable way to transform your problem to a form valid within that abstraction then of course you shouldn't use it. Also, if you have a problem that is solved more efficiently using another abstraction, then use that other abstraction.

This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.

You can't create moral ideas in the first place, or judge which are good (without, again, assuming a moral standard that you can't evaluate).

These are just computable processes; if Bayesianism is in some sense Turing complete then it can be used to do all of this; it just might be very inefficient when compared to other approaches.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods. Other aspects should probably be accomplished using other methods.

Comment author: curi 07 April 2011 08:41:46PM 0 points

First of all, you shouldn't lump me in with the Yudkowskyist Bayesians. Compared to them and to you I am in a distinct third party on epistemology.

Sorry. I have no idea who is who. Don't mind me.

This doesn't mean that Bayes' theorem is useless, it just means there are domains of reasonable usage. The same will be true for your Popperian decision making.

The Popperian method is universal.

if Bayesianism is in some sense Turing complete then it can be used to do all of this

Well, umm, yes, but that's no help. My iMac is definitely Turing complete. It could run an AI. It could do whatever. But we don't know how to make it do that stuff. Epistemology should help us.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.

Example or details?

Comment author: David_Allen 07 April 2011 09:59:13PM 0 points

Sorry. I have no idea who is who. Don't mind me.

No problem, I'm just pointing out that there are other perspectives out here.

The Popperian method is universal.

Sure, in the sense it is Turing complete; but that doesn't make it the most efficient approach for all cases. For example, I'm not going to use it to evaluate the expression "2 + 3"; it is much more efficient for me to use the arithmetic abstraction.

But we don't know how to make it do that stuff. Epistemology should help us.

Agreed, it is one of the reasons that I am actively working on epistemology.

Aspects of coming up with moral ideas and judging which ones are good would probably be accomplished well with Bayesian methods.

Example or details?

The naive Bayes classifier can be an effective way to classify discrete input into independent classes. Certainly for some cases it could be used to classify something as "good" or "bad" based on example input.
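A minimal sketch of that suggestion, assuming toy data: the feature words, labels, and counts below are all invented for illustration, not drawn from any real moral dataset. It is a standard naive Bayes classifier with Laplace smoothing, trained on a handful of "good"/"bad" labelled actions:

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (feature_list, label). Returns naive Bayes counts."""
    label_counts = Counter()
    feature_counts = defaultdict(Counter)
    vocab = set()
    for feats, label in examples:
        label_counts[label] += 1
        for f in feats:
            feature_counts[label][f] += 1
            vocab.add(f)
    return label_counts, feature_counts, vocab

def classify_nb(feats, label_counts, feature_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(feature|label)."""
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, lc in label_counts.items():
        lp = math.log(lc / total)
        # Laplace (add-one) smoothing so unseen features don't zero things out
        denom = sum(feature_counts[label].values()) + len(vocab)
        for f in feats:
            lp += math.log((feature_counts[label][f] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical training data: actions described by feature words
examples = [(["helps", "others"], "good"),
            (["shares", "food"], "good"),
            (["steals", "food"], "bad"),
            (["harms", "others"], "bad")]
model = train_nb(examples)
```

On these toy examples, `classify_nb(["helps", "food"], *model)` comes out "good", since "helps" appears only in good-labelled examples; whether such a classifier captures anything about morality is exactly the point under dispute.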

Bayesian networks can capture the meaning within interdependent sets. For example the meaning of words forms a complex network; if the meaning of a single word shifts it will probably result in changes to the meanings of related words; and in a similar way ideas on morality form connected interdependent structures.

Within a culture a particular moral position may be dependent on other moral positions, or even other aspects of the culture. For example a combination of religious beliefs and inheritance traditions might result in a belief that a husband is justified in killing an unfaithful wife. A Bayesian network trained on information across cultures might be able to identify these kinds of relationships. With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"
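As a toy version of that idea (every node name and probability below is made up for illustration; a real network would be learned from cross-cultural data): a three-node network Religion → Belief ← Tradition, with the query answered by enumerating the joint distribution:

```python
import itertools

# Hand-specified toy Bayesian network (all numbers invented)
P_R = {True: 0.3, False: 0.7}   # P(religious culture)
P_T = {True: 0.4, False: 0.6}   # P(strict inheritance tradition)
P_B = {                          # P(moral belief | religion, tradition)
    (True, True): 0.8, (True, False): 0.3,
    (False, True): 0.2, (False, False): 0.05,
}

def p_belief_given(religion=None, tradition=None):
    """P(Belief=True | evidence), by summing the joint over consistent worlds."""
    num = den = 0.0
    for r, t in itertools.product([True, False], repeat=2):
        if religion is not None and r != religion:
            continue
        if tradition is not None and t != tradition:
            continue
        w = P_R[r] * P_T[t]
        num += w * P_B[(r, t)]
        den += w
    return num / den
```

With these made-up numbers, conditioning on both parents just reads off the table entry (0.8), while the unconditional marginal mixes all four cases; comparing such conditionals across cultures is the kind of "Why is X moral here but not there?" query suggested above.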

Comment author: curi 08 April 2011 12:37:39AM 0 points

Sure, in the sense it is Turing complete;

No, in the sense that it directly applies to all types of knowledge (which any epistemology applies to -- which I think is all of them, but that doesn't matter to universality).

Not in the sense that it's Turing complete so you could, by a roundabout way and using whatever methods, do anything.

I think the basic way we differ is you have despaired of philosophy getting anywhere, and you're trying to get rigor from math. But Popper saved philosophy. (And most people didn't notice.) Example:

With this you could start to answer questions like "Why is X moral in the UK but not in Saudi Arabia?"

You have very limited ambitions. You're trying to focus on small questions because you think bigger ones, like "what is moral, objectively?", are too hard and, since your math won't answer them, hopeless.

Comment author: paulfchristiano 07 April 2011 01:12:55AM *  2 points

Having read the website you linked to in its entirety, I think we should defer this discussion (as a community) until the next time you explain why someone's particular belief is wrong, at which point you will be forced to make an actual claim which can be rejected.

In particular, if you ever try to make a claim of the form "You should not believe X, because Bayesianism is wrong, and undesirable Y will happen if you act on this belief" then I would be interested in the resulting discussion. We could do the same thing now, I guess, if you want to make such a claim of some historical decision.

Edit: changed wording to be less of an ass.

Comment author: curi 07 April 2011 01:24:01AM 2 points

In its entirety? Assuming you spent 40 minutes reading, 0 minutes delay before you saw my post, 0 minutes reading my post here, and 2:23 writing your reply, then you read at a speed of around 833 words per minute. That is very impressive. Where did you learn to do that? How can I learn to do that too?

Given that I do make claims on my website, I wonder why you don't pick one and point out something you think is wrong with it.

Comment author: paulfchristiano 07 April 2011 01:33:16AM *  2 points

Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)

I did read the parts of your website that relate to the question at hand. I do skim at several hundred words per minute (in much more detail than was needed for this application), though I did not spend the entire time reading. Much of the content of the website (perfectly reasonably) is devoted to things not really germane to this discussion.

If you really want (because I am constitutively incapable of letting an argument on the internet go) you could point to a particular claim you make, of the form I asked for. My issue is not really that I have an objection to any of your arguments--it's that you seem to offer no concrete points where your epistemology leads to a different conclusion than Bayesianism, or in which Bayesianism will get you into trouble. I don't think this is necessarily a flaw with your website--presumably it was not designed first and foremost as a response to Bayesianism--but given this observation I would rather defer discussion until such a claim does come up and I can argue in a more concrete way.

To be clear, what I am looking for is a statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology. Here is why my belief is more correct, and why your belief will do damage." Or whatever example it is you would like to use. Any example at all. Even an argument that Bayesian reasoning with the Solomonoff prior has been "wrong" where Popper would be clearly "right" at any historical point would be good enough to argue about.

Comment author: curi 07 April 2011 01:47:06AM *  0 points

statement of the form: "Based on Bayesian reasoning, you conclude that there is a 50% chance that a singularity will occur by 2060. This is a dangerous and wrong belief. By acting on it you will do damage. I would not believe such a thing because of my improved epistemology."

Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

While Fallible Ideas does not comment on Bayesian Epistemology directly, it takes a different approach. You do not find Bayesians advocating the same ways of thinking. They have a different (worse, IMO) emphasis.

I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.

Fair, fair. I should have thought more and been less heated. (My initial response was even worse!)

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.

Comment author: Desrtopa 07 April 2011 01:58:08AM 4 points

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference? You learned Bayesian stuff but it apparently didn't solve your problem, whereas my epistemology did solve mine.

It doesn't take Popperian epistemology to learn social fluency. I've learned to limit conflict and improve the productivity of my discussions, and I am (to the best of my ability) Bayesian in my epistemology.

If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.

Comment author: JoshuaZ 07 April 2011 02:07:05AM 2 points

If you want to credit a particular skill to your epistemology, you should first see whether it's more likely to arise among those who share your epistemology than those who don't.

That's a claim that only makes sense in certain epistemological systems...

Comment author: curi 07 April 2011 02:09:13AM *  3 points

I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.

Comment author: JoshuaZ 07 April 2011 02:36:12AM 0 points

I don't have a problem with the main substance of that argument, which I agree with. Your implication that we would reject this idea is mistaken.

Hmm? I'm not sure who you mean by we? If you mean that someone supporting a Popperian approach to epistemology would probably find this idea reasonable, then I agree with you (at least empirically, people claiming to support some form of Popperian approach seem OK with this sort of thing; that's not to say I understand how they think it is implied/OK in a Popperian framework).

Comment author: paulfchristiano 07 April 2011 02:09:39AM 3 points

Using my epistemology I have learned not to do that kind of thing. Would that serve as an example of a practical benefit of it, and a substantive difference?

No. It provides an example of a way in which you are better than me. I am overwhelmingly confident that I can find ways in which I am better than you.

Do you assert that? It is wrong and has real world consequence. In The Beginning of Infinity Deutsch takes on a claim of a similar type (50% probability of humanity surviving the next century) using Popperian epistemology. You can find Deutsch explaining some of that material here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks

Could you explain how a Popperian disputes such an assertion? Through only my own fault, I can't listen to an mp3 right now.

My understanding is that anyone would make that argument in the same way: by providing evidence in the Bayesian sense, which would convince a Bayesian. What I am really asking for is a description of why your beliefs aren't the same as mine but better. Why is it that a Popperian disagrees with a Bayesian in this case? What argument do they accept that a Bayesian wouldn't? What is the corresponding calculation a Popperian does when he has to decide how to gamble with the lives of six billion people on an uncertain assertion?

I wonder if you think that all mathematically equivalent ways of thinking are equal. I believe they aren't because some are more convenient, some get to answers more directly, some make it harder to make mistakes, and so on. So even if my approach was compatible with the Bayesian approach, that wouldn't mean we agree or have nothing to discuss.

I agree that different ways of thinking can be better or worse even when they come to the same conclusions. You seem to be arguing that Bayesianism is wrong, which is a very different thing. At best, you seem to be claiming that trying to come up with probabilities is a bad idea. I don't yet understand exactly what you mean. Would you never take a bet? Would never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?

This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.

I do not plan to continue this discussion except in the pursuit of an example about which we could actually argue productively.

Comment author: curi 07 April 2011 02:46:51AM 0 points

Could you explain how a Popperian disputes such an assertion? [(50% probability of humanity surviving the next century)]

e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.

You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?

You seem to be arguing that Bayesianism is wrong, which is a very different thing.

I think it's wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes' theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don't want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.

Would you never take a bet?

Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (because, e.g., there is no way to make them non-arbitrary). And it's not something a fallibilist can bet on, because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?

Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?

We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.

This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesians' willingness to bet does harm. Explain why Bayesians are slower than Popperians at reaching the same conclusion. Whatever you want.

Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I argued in the original post on this page, and no one has answered that argument yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from mere consistency with the evidence (you tell me).

Comment author: jake987722 07 April 2011 03:24:11AM 6 points [-]

So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?

Almost, but you seem to have left out the rather important detail of how you actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.

First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved.

Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors.

Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?

Comment author: curi 07 April 2011 03:59:34AM 2 points [-]

When you have exactly one non-refuted theory, you go with that.

The other cases are more complicated and difficult to understand.

Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?

If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?

If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.

[option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.

But some are criticized and some aren't.

[option 2] conjecture that best weathered the criticisms you were able to muster

But how is that to be judged?

No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.).

Or is it something radically different from these two altogether?

Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.

Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.

Comment author: Larks 08 April 2011 01:03:10AM 0 points [-]

And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge.

You're equivocating between "knowing exactly the contents of the new knowledge", which may be impossible for the reason you describe, and "know some things about the effect of the new knowledge", which we can do. As Eliezer said, I may not know which move Kasparov will make, but I know he will win.

Comment author: timtyler 07 April 2011 12:48:52PM *  1 point [-]

what you're doing here is conflating Bayes' theorem (which is about probability, and which is a matter of logic, and which is correct) with Bayesian epistemology (the application of Bayes' theorem to epistemological problems, rather than to the math behind betting).

That's because to a Bayesian, these things are the same thing. Epistemology is all about probability, and vice versa. Bayes's theorem includes induction and confirmation. You can't accept Bayes's theorem and reject induction without crazy inconsistency -- and Bayes's theorem is just the math of probability theory.
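To make the confirmation claim concrete (a minimal numeric sketch, not from the thread): in Bayesian terms, evidence E positively supports H exactly when P(H|E) > P(H), which requires E to be more likely under H than under its negation. Mere consistency (P(E|H) > 0) is not enough.

```python
# Minimal illustration of Bayesian confirmation with a two-hypothesis
# partition {H, ~H}. E supports H only when the likelihood ratio
# P(E|H) / P(E|~H) exceeds 1.

def posterior(prior, like_h, like_not_h):
    """P(H|E) via Bayes' theorem: P(E|H)P(H) / P(E)."""
    evidence = like_h * prior + like_not_h * (1 - prior)
    return like_h * prior / evidence

prior = 0.5

# E is equally consistent with H and ~H: no update, no support.
print(posterior(prior, 0.8, 0.8))  # 0.5

# E is more likely under H than under ~H: posterior rises above the prior.
print(posterior(prior, 0.8, 0.2))  # 0.8
```

Whether this arithmetic counts as an answer to Popper's criticism of "positive support" is exactly what the thread disputes; the sketch only shows what Bayesians mean by the term.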

Comment author: [deleted] 07 April 2011 01:05:19PM 0 points [-]

If I understand correctly, I think curi is saying that there's no reason for probability and epistemology to be the same thing. That said, I don't entirely understand his/her argument in this thread, as some of the criticisms he/she mentions are vague. For example, what are these "epistemological problems" that Popper solves but Bayes doesn't?