This is a discussion page because I got the message "Comment too long". Apparently the same formatting magic doesn't work here for quotes :( It is a reply to:
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3ulv
> > You can conjecture Bayes' theorem. You can also conjecture all the rest, however some things (such as induction, justificationism, foundationalism) contradict Popper's epistemology. So at least one of them has a mistake to fix. Fixing that may or may not lead to drastic changes, abandonment of the main ideas, etc
> Fully agreed. In principle, if Popper's epistemology is of the second, self-modifying type, there would be nothing wrong with drastic changes. One could argue that something like that is exactly how I arrived at my current beliefs, I wasn't born a Bayesian.
OK great.
If the changes were large enough, to important parts (for example, if it lost the ability to self-modify), I wouldn't want to call it Popper's epistemology anymore (unless maybe the changes were made very gradually, with Popper's ideas being valued the whole time, and still valued at the end). It would be departing from his tradition too much, so it would be something else. A minor issue in some ways, but tradition matters.
> I can also see some ways to make induction and foundationalism easier to swallow.
> A discussion post sounds about right for this, if enough people like it you might consider moving it to the main site.
104 comments later it's at 0 karma. There is interest, but not so much liking. I don't think the main site is the right place for me ;-)
> > I think you are claiming that seeing a white swan is positive support for the assertion that all swans are white. (If not, please clarify).
> This is precisely what I am saying.
Based on what you say later, I'm not sure if you mean this in the same way I meant it. I meant: it is positive support for "all swans are white" *over* all theories which assert "all swans are black" (I disagree with that claim). If it doesn't support it *more than those other theories* then I regard it as vacuous. I don't believe the math you offered meets this challenge of supporting "all swans are white" more than various opposites of it. I'm not sure if you intended it to.
> > If so, this gets into important issues. Popper disputed the idea of positive support. The criticism of the concept begins by considering: what is support? And in particular, what is the difference between "X supports Y" and "X is consistent with Y"?
> The beauty of Bayes is how it answers these questions. To distinguish between the two statements we express them each in terms of probabilities.
> "X is consistent with Y" is not really a Bayesian way of putting things, I can see two ways of interpreting it. One is as P(X&Y) > 0, meaning it is at least theoretically possible that both X and Y are true. The other is that P(X|Y) is reasonably large, i.e. that X is plausible if we assume Y.
Consistent means "doesn't contradict". It's the first one. Plausible is definitely not what I wanted.
> "X supports Y" means P(Y|X) > P(Y), X supports Y if and only if Y becomes more plausible when we learn of X. Bayes tells us that this is equivalent to P(X|Y) > P(X), i.e. if Y would suggest that X is more likely that we might think otherwise then X is support of Y.
This is true but fairly vacuous, in my view. I don't want to argue over what counts as significant. If you like it, shrug. It is important that, e.g., we reject ideas refuted by evidence. But I don't think this addresses the major problems in epistemology, which come after we decide to reject things which are refuted by evidence.
The reason it doesn't is that there are always infinitely many things supported by any evidence, in this sense. Infinitely many things which make wildly different predictions about the future, but identical predictions about whatever our evidence covers. If Y is 10 white swans, and X is "all swans are white", then X is supported, by your statement. But also supported are infinitely many different theories claiming that all swans are black, and that you hallucinated. You saw exactly what you would see if any of those theories were true, so they get as much support as anything else. There is nothing (in the concept of support) to differentiate between "all swans are white" and those other theories.
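To put numbers on this (my own made-up numbers): any theory that predicted the evidence with probability 1 gets exactly the same support factor, no matter what else it claims:

```python
# Two rival theories that make identical predictions about evidence E
# ("you see 10 white swans") but contradictory claims about everything
# else. P(E) = 0.1 is an arbitrary placeholder.
p_e = 0.1

theories = {
    "all swans are white": 1.0,                         # P(E|T)
    "all swans are black, but you hallucinate E": 1.0,  # P(E|T)
}

for name, p_e_given_t in theories.items():
    support_factor = p_e_given_t / p_e  # posterior = prior * this factor
    print(f"{name}: posterior multiplied by {support_factor}")

# Both posteriors are multiplied by the identical factor (10.0), so
# the evidence does nothing to differentiate the rivals.
```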
If you do add something else to differentiate, I will say the support concept is useless. The new thing does all the work. And further, the support concept is frequently abused. I have had people tell me that "all swans are black, but tomorrow you will hallucinate 10 white swans" is supported less by seeing 10 white swans tomorrow than "all swans are white" is, even though they made identical predictions (and asserted them with 100% probability, and would both have been definitely refuted by anything else). That kind of stuff is just wrong. I don't know if you think that kind of thing or not. What you said here doesn't clearly disown it, nor advocate it. But that's the kind of thing that concerns me.
> Suppose we make X the statement "the first swan I see today is white" and Y the statement "all swans are white". P(X|Y) is very close to 1, P(X|~Y) is less than 1 so P(X|Y) > P(X), so seeing a white swan offers support for the view that all swans are white. Very, very weak support, but support nonetheless.
The problem I have is that it's not supported over infinitely many rivals. So how is that really support? It's useless. The only stuff not being supported is that which contradicts the evidence (literally contradicts, with no hallucination claims; e.g. a theory that predicts you will think you saw a green swan tomorrow, but then you don't, just the white ones. That one is refuted). The inconsistent theories are refuted. The theories which make probabilistic predictions are partially supported. And the theories that say "screw probability, 100% every time" for all predictions get maximally supported, and between them support does not differentiate. (BTW I think it's ironic that I score better on support when I just stick 100% in front of every prediction in all theories I mention, while you score lower by putting in other numbers, so your support concept discourages ever making predictions with under 100% confidence.)
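To illustrate that parenthetical point with made-up numbers:

```python
# Support factor P(E|T)/P(E) for a fully confident vs. a hedged theory.
# The numbers are illustrative assumptions.
p_e = 0.1                  # assumed marginal probability of the evidence

p_e_given_confident = 1.0  # "100% every time"
p_e_given_hedged = 0.8     # same prediction, asserted at 80%

print(p_e_given_confident / p_e)  # 10.0 -- maximal support
print(p_e_given_hedged / p_e)     # 8.0  -- less support, for hedging

# Asserting every prediction at 100% always scores at least as well,
# until a prediction fails and the theory is refuted outright.
```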
> (The above is not meant to be condescending, I apologise if you know all of it already).
It is not condescending. I think (following Popper) that explaining things is important and that nothing is obvious, and that communication is difficult enough without people refusing to go over the "basics" in order to better understand each other. Of course this is a case where Popper's idea is not unique. Other people have said similar things. But this idea, and others, are integrated into his epistemology closely. There's also *far more detail and precision* available, to explain *why* this stuff is true (e.g. lengthy theories about the nature of communication, also integrated into his epistemology). I don't think ideas about interpreting people's writing in kind ways, and miscommunication being a major hurdle, are so closely integrated with Bayesian approaches, which are more math-focused and don't integrate so nicely with explanations.
My reply about support is basic stuff too, to my eye. But maybe not yours. I don't know. I expect not, since if it was you could have addressed it in advance. Oh well. It doesn't matter. Reply as you will. No doubt I'm also failing to address in advance something you regard as important.
> > To show they are correct. Popper's epistemology is different: ideas never have any positive support, confirmation, verification, justification, high probability, etc...
> This is a very tough bullet to bite.
Yes it is tough. Because this stuff has been integral to the Western philosophy tradition since Aristotle until Popper. That's a long time. It became common sense, intuitive, etc...
> > How do we decide which idea is better than the others? We can differentiate ideas by criticism. When we see a mistake in an idea, we criticize it (criticism = explaining a mistake/flaw). That refutes the idea. We should act on or use non-refuted ideas in preference over refuted ideas.
> One thing I don't like about this is the whole 'one strike and you're out' feel of it. It's very boolean,
Hmm. FYI that is my emphasis more than Popper's. I think it simplifies the theory a bit to regard all changes to theories as new theories. Keep in mind you can always invent a new theory with one thing changed. So the ways it matters have some limits; it's partly just a terminology thing (terminology has meaning, and some is better than others. Mine is chosen with Popperian considerations in mind. A lot of Popper's is chosen with talking to his critics in mind). Popper sometimes emphasized that it's important not to give up on theories too easily, but to look for ways to improve them when they are criticized. I agree with that. So, the "one strike you're out" way of expressing this is misleading, and isn't *substantially* implied in my statements (b/c of the possibility of creating new and similar theories). Other terminologies have different problems.
> the real world isn't usually so crisp. Even a correct theory will sometimes have some evidence pointing against it, and in policy debates almost every suggestion will have some kind of downside.
This is a substantive, not terminological, disagreement, I believe. I think it's one of the *advantages* of my terminology that it helped highlight this disagreement.
Note the idea that evidence "points" is the support idea.
In the Popperian scheme of things, evidence does not point. It contradicts, or it doesn't (given some interpretation and explanation, which are often more important than the evidence itself). That's it. Evidence can thus be used in criticisms, but is not itself inherently a criticism or argument.
So let me rephrase what you were saying. "Even a correct theory will sometimes have critical arguments against it".
Part of the Popperian view is that if an idea has one false aspect, it is false. There is a sense in which any flaw must be decisive. We can't just go around admitting mistakes into our ideas on purpose.
One way to explain the issue is: for each criticism, consider it. Judge whether it's right or wrong. Do your best, and act accordingly. If you think the criticism is correct, you absolutely must reject the idea it criticizes. If you don't, then you can regard the theory as not having any *true* critical arguments against it, so that's fine.
When you reject an idea for having one false part, you can try to form a new theory to rescue the parts you still value. This runs into dangers of arbitrarily rescuing everything in an ad hoc way. There are two answers to that. The first is: who cares? Popperian epistemology is not about laying out rules to prevent you from thinking badly. It's about offering advice to help you think better. We don't really care very much if you find a way to game the system and do something dumb, such as making a series of 200 ad hoc and silly arguments to try to defend a theory you are attached to. All we'll do is criticize you for it. And we think that is good enough: there are criticisms of bad methodologies, but no formal rules that definitively ban them. Now the second answer, which Deutsch presents in The Fabric of Reality, is that when you modify theories you often ruin their explanation. If you don't, then the modification is OK and the new theory is worth considering. But if the explanation is ruined, that puts an end to trying to rescue it (unless you can come up with a good idea for a new way to modify it that won't ruin the explanation).
This concept of ruining explanations is important and not simple. Reading the book would be great (it is polished! edited!) but I'll try to explain it briefly. This example is actually from his other book, _The Beginning of Infinity_, chapter 1. We'll start with a bad theory: the seasons are caused by Persephone's imprisonment, for 6 months of the year, in the underworld (via her mother Demeter's magic powers, which she uses to express her emotions). This theory has a bad explanation in the first place, so it can be easily rescued when it's empirically contradicted. For example, this theory predicts the seasons will be the same all over the globe, at the same time. That's false. But you can modify the theory very easily to account for the empirical data. You can say that Demeter only cares about the area where she lives. She makes it cold when Persephone is gone, and hot when she's present. The cold or hot has to go somewhere, so she puts it far away. So, the theory is saved by an ad hoc modification. It's no worse than before. Its substantive content was "Demeter's emotions and magic account for the seasons". And when the facts change, that explanation remains intact. This is a warning against bad explanations (which can be criticized directly for being bad explanations, so there's no big problem here).
But when you have a good explanation, such as the real explanation for the seasons, based on the Earth orbiting the sun, and the axis being tilted, and so on, ad hoc modifications cause bigger problems. Suppose we found out the seasons are the same all around the world at the same time. That would refute the axis-tilt theory of seasons. You could try to save it, but it's hard. If you added magic you would be ruining the axis-tilt *explanation* and resorting to a very different explanation. I can't think of any way to save the axis-tilt theory from the observation that the whole world has the same seasons at the same time, without contradicting or replacing its explanation. So that's why ad hoc modifications sometimes fail (for good explanatory theories only). In the cases where there is not a failure of this type -- if there is a way to keep a good explanation and still account for new data -- then that new theory is genuinely worth consideration (and if there is something wrong with it, you can criticize it).
> There is also the worry that there could be more than one non-refuted idea, which makes it a bit difficult to make decisions.
Yes I know. This is an important problem. I regard it as solved. For discussion of this problem, go to:
http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
> Bayesianism, on the other hand, when combined with expected utility theory, is perfect for making decisions.
Bayesianism works when you assume a bunch of stuff (e.g. some evidence), and you set up a clean example, and you choose an issue it's good at handling. I don't think it is very helpful in a lot of real world cases. Certainly it helps in some. I regard Bayes' theorem itself as "how not to get probability wrong". That matters to a good amount of stuff. But hard real world scenarios usually have rival explanations of the proper interpretation of the available evidence, they have fallible evidence that is in doubt, they have often many different arguments that are hard to assign any numbers to, and so on. Using Solomonoff induction to assign numbers, for example, doesn't work in practice as far as I know (e.g. people don't actually compute the numbers for dozens of political arguments using it). Another assumption being made is *what is a desirable (high utility) outcome* -- Bayesianism doesn't help you figure that out, it just lets you assume it (I see that as entrenching bias and subjectivism in regard to morality -- we *can* make objective criticisms of moral values).
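To be clear what I mean by "it just lets you assume it", here's a minimal sketch of an expected utility calculation; every probability and utility in it is an assumed input, which the framework combines but does not supply:

```python
# Expected-utility decision making: pick the action maximizing the
# probability-weighted sum of utilities. All numbers are assumed inputs.
actions = {
    "carry umbrella": {"rain": (0.3, 5), "dry": (0.7, 3)},
    "leave umbrella": {"rain": (0.3, -10), "dry": (0.7, 6)},
}

def expected_utility(outcomes):
    # Each outcome is a (probability, utility) pair.
    return sum(p * u for p, u in outcomes.values())

for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))
# carry umbrella: 0.3*5 + 0.7*3 = 3.6
# leave umbrella: 0.3*-10 + 0.7*6 = 1.2

print("chosen:", max(actions, key=lambda a: expected_utility(actions[a])))
```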
What I'm meaning to point out is the absence of something with a title along the lines of "Bayesian epistemology" that explains Bayesian epistemology specifically.
What LW has is "here's the theorem" and "everything works by this theorem", without the second one being explained in coherent detail. I mean, Bayes structure is everywhere. But just noting that is not an explanation.
There's potential here for an enormously popular front-page post that gets linked all over the net forever, because the SEP article is so awful ...