Branching from: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3uta?context=4

The question is: how do you make decisions without justifying decisions, and without foundations?

If you can do that, I claim the regress problem is solved. Whereas induction, for example, is refuted by the regress problem (no, arbitrary foundations or circular arguments are not solutions).

OK stepping back a bit, and explaining less briefly:

 

Infinite regresses are nasty problems for epistemologies.

All justificationist epistemologies have an infinite regress.

That means they are false. They don't work. End of story.

There's options of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.

The regress goes like this, and the details of the justification don't matter.

If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.

I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why; the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, we have to wonder whether that 99.999% figure is correct. If it's not correct it could be anything, such as 90% or 1% or whatever. So it better be correct. So we better justify that figure. How? Simple. We'll use our whim to assign it a probability of 99.99999%. OK! Now we're getting somewhere. I put in a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything, such as 2% or 0.0001%. Uh oh. I better justify my second probability estimate. How? Well, we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we better justify it. So it goes on and on forever.

Now there are two problems. First, it goes on forever and you can't ever stop: you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability that the first theory is correct is arbitrarily small. Remember that at each step we didn't have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
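To make the arithmetic concrete, here is a minimal sketch in Python. The specific numbers are as arbitrary as the ones in the paragraph above; the only point is that multiplying many merely-probable justification steps drives the overall probability toward zero:

```python
# Illustration only: chain together the probabilities that each layer of
# justification is itself correct. Even with generous per-step figures,
# the product shrinks fast.
step_probabilities = [0.99999, 0.9999999, 0.8] + [0.95] * 100

overall = 1.0
for p in step_probabilities:
    overall *= p  # each layer of justification must also hold

print(overall)  # roughly 0.005 -- the chain almost certainly breaks somewhere
```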

OK so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism, that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.

Now we'll take a look at Popper and Deutsch's solution. In my words, with minor improvements.

Regresses all go away if we drop justification. Don't justify anything, ever. Simple.

But justification had a purpose.

The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?

BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.

The method of addressing this issue has several steps, so try to follow along.

Step 1) You can suggest any ideas you want. There's no rules, just anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, doesn't matter to anything. This part is easy.

Step 2) You can criticize any idea you want. There's no rules again. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.

Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).

Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!

Step 5) What if the number of ideas left over is not exactly one? We'll divide that into two cases:

Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.

Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.

OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.

But if you have a decision to make then you need an answer now so you can make your decision. So in that case, if you actually want to reach a state of having exactly one theory which you can use now, then the trick, when you get stuck, is to be less ambitious. I think you can see how that would work in general terms. Basically, if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or to accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well, I can't tell you how to come up with one in the next five minutes, sorry.

This is a bit vague so far. How does lowering your standards address the problem? What you do is propose a new idea like this: "I need to do something, so I will do..." and then you put in whatever you want (idea A, idea B, some combination, whatever).

This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.

What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high.
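For readers who want the procedure spelled out mechanically, here is a minimal sketch of the above steps as a loop. The function names (propose_ideas, criticize, lower_ambition) and the convention that a criticism is any non-None value are my own illustrative choices, not part of the method itself:

```python
def decide(problem, propose_ideas, criticize, lower_ambition):
    """Sketch of the conjecture-and-criticism procedure described above.

    propose_ideas(problem)  -> list of candidate ideas (step 1)
    criticize(idea, rivals) -> a criticism, or None if no one currently has one (steps 2-3)
    lower_ambition(problem) -> a less ambitious version of the problem (step 5, case 2)
    """
    while True:
        # Step 1: suggest any ideas at all; their source doesn't matter.
        ideas = propose_ideas(problem)

        # Steps 2-3: every idea anyone can criticize is rejected.
        survivors = [idea for idea in ideas
                     if criticize(idea, [r for r in ideas if r is not idea]) is None]

        # Step 4: exactly one non-refuted idea left -- use it.
        if len(survivors) == 1:
            return survivors[0]

        # Step 5, case 1: two or more survivors. Each failed to refute its rivals,
        # and that failure is itself a criticism of it, so none survive.
        # Step 5, case 2: zero survivors. Be less ambitious and try again, e.g.
        # propose "I need to do something, so I will do <one of the candidates>".
        problem = lower_ambition(problem)
```

In this sketch, criticize returning None just means nobody currently has a criticism of that idea; termination relies on lower_ambition eventually producing a problem modest enough that some proposal of the form "I need to do something, so I will do..." survives unrefuted.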

BTW it's explained on my website here, including an example:

http://fallibleideas.com/avoiding-coercion

Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.

What if you get really stuck? Well, why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well, so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.

Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long-term progress goes, the truth-seeking came in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution, it's the solution to Paley's problem, it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you get closer to the truth. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.

 

Comments (101, some truncated):

I am very inexperienced in epistemology, so forgive me if I'm making a simple error.

But it sounds like everything important in your theory is stuck into a black box in the words "criticize the idea".

Suppose we had a computer program designed to print the words "I like this idea" to any idea represented as a string with exactly 5 instances of the letter 'e' in it, and the words "I dislike this idea because it has the wrong number of 'e's in it" to any other idea.

And suppose we had a second computer program designed to print "I like this idea" to any idea printed on blue paper, and "I dislike this idea because it is on the wrong color paper" to any idea printed on any other color of paper.

These two computers could run through your decision making process of generating and criticizing ideas, and eventually would settle on the first idea generated which was written on blue paper and which used the letter 'e' exactly five times.
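For what it's worth, the two toy programs are easy to write down. This is only a paraphrase of the example in Python, with made-up names and sample ideas; none of it comes from the method being discussed:

```python
def critic_letter_e(idea_text):
    """First program: likes any idea whose text contains exactly five 'e's."""
    if idea_text.count("e") == 5:
        return None  # "I like this idea" -- no criticism offered
    return "I dislike this idea because it has the wrong number of 'e's in it"


def critic_paper_colour(idea_text, paper_colour):
    """Second program: likes any idea printed on blue paper."""
    if paper_colour == "blue":
        return None  # no criticism offered
    return "I dislike this idea because it is on the wrong color paper"


# Feed candidate ideas through both "critics"; whatever survives both is adopted,
# regardless of its actual content.
candidates = [("the earth is flat", "red"), ("three geese", "blue")]
adopted = [text for text, colour in candidates
           if critic_letter_e(text) is None and critic_paper_colour(text, colour) is None]
print(adopted)  # ['three geese'] -- it's on blue paper and has exactly five 'e's
```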

So it would seem that for this process to capture what we mean by "truth", you have to start out with some reasoners who already have a pretty good set of internal reasoning processes kind of like our ow... (read more)

Regarding the technical side of your post, if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X. (If you don't trust your own transistors, add error-correction at the level of transistors, don't contaminate the software.) But it's hard to think of a situation where the program will need to make use of the latter probability.
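A toy illustration of that point, with made-up names (a program's belief assignments are just data it can inspect, so statements about its own assignments can get probability 1 even while the underlying proposition stays uncertain):

```python
beliefs = {"X": 0.87}  # the program's probability assignment for proposition X

p_x = beliefs["X"]  # 0.87: how likely X is, according to the program
p_self_report = 1.0 if beliefs["X"] == 0.87 else 0.0  # certainty about its own assignment
print(p_x, p_self_report)  # 0.87 1.0
```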

Regarding the substance, I think you disagree with popular opinion on LW because there are two possible meanings of "epistemology":

1) If a human wants to have rational beliefs, what rulebook should they follow?

2) If we want to write a computer program that arrives at rational beliefs, what algorithms should we use?

From your posts and comments it looks like you're promoting Popperianism as an answer to (1). The problem is, it's pretty hard to determine whether a given answer to (1) is right, wrong or meaningless, when it's composed of mere words (cognitive black boxes) and doesn't automatically translate to an answer for (2). So most LWers think that (2) is really the right question to ask, and any non-confused answer to (2)... (read more)

-2curi
I don't agree with either meaning of epistemology. The traditional meaning of epistemology, which I accept, is the study of knowledge, and in particular questions like: What is knowledge? How do we sort out good ideas from bad ideas? How is knowledge created? Both of your definitions of the field have Bayesian ways of thinking already built into them. They are biased. If Bayesianism doesn't want to be an epistemology, that would be OK with me. But for example Yudkowsky claimed that Bayesianism was dethroning Popperism. To do that it has to be an epistemology and deal with the same questions Popper addresses.

Popperian epistemology does not offer any rulebook. It says rulebooks are an authoritarian and foundationalist mistake, which comes out of the attempt to find a source of justification. (Well, the psychological claims are not important and not epistemology. But Popper did occasionally say things like that, and I think it's true.)

I will take a look at your links, thanks. I respect that author a lot for this post on why heritability studies are wrong: http://cscs.umich.edu/~crshalizi/weblog/520.html

Note that Popperians think there is no algorithm that automatically arrives at rational beliefs. There's no privileged road to truth. AIs will not be more rational than people. OK, they usually won't have a few uniquely human flaws (like, umm, caring if they are fat). But there is no particular reason to expect this stuff will be replaced with correct ideas. Whatever AIs think of instead will have its own mistakes. It's the same kind of issue as if some children were left on a deserted island to form their own culture. They'll avoid various mistakes from our culture, but they will also make new ones.

The rationality of AIs, just like the rationality of the next generation, depends primarily on the rationality of the educational techniques used (education is closely connected to epistemology in my view, because it's about learning, i.e. creating knowledge. Popperian

I'm willing to reformulate like this:

1) How can a human sort out good ideas from bad ideas?

2) How can a computer program sort out good ideas from bad ideas?

and the subsequent paragraph can stay unchanged. Whatever recipe you're proposing to improve human understanding, it ought to be "reductionist" and apply to programs too, otherwise it doesn't meet the LW standard. Whether AIs can be more rational than people is beside the point.

0curi
I don't think you understood the word "reductionist". Reductionism doesn't mean that things can be reduced to lower levels but that they should -- it actually objects to high-level statements and considers them worse. There's no need for reductionism of that kind for ideas to be applicable to low-level issues like being programmable.

Yes, Popperian epistemology can be used for an AI with the reformulations (at least: I don't know any argument that it couldn't). Why aren't we there yet? There aren't a lot of Popperians, Popperian philosophy does not seek to be formal which makes it harder to translate into code, and most effort has been directed at human problems (including criticizing large mistakes plaguing the field of philosophy, and which also affect regular people and permeate our culture).

The epistemology problems important to humans are not all the same as the ones important to writing an AI. For an AI you need to worry about what information to start it with. Humans are born with information, we don't yet have the science to control that, so there is only limited reason to worry about it. Similarly there is the issue of how to educate a very young child. No one knows the answer to that in words -- they can do it by following cultural traditions but they can't explain it. But for AIs, how to deal with the very young stages is important.

Broadly an AI will need a conjecture generator, a criticism generator, and a criticism evaluator. Humans have these built in. So again the problems for AI are somewhat different than what's important for, e.g., explaining epistemology to human adults. You may think the details of these things in humans are crucially important. The reason they aren't is that they are universal, so implementation details don't affect anything much about our lives. It's still interesting to think about. I do sometimes. I'll try to present a few issues. In abstract terms we would be content with a random conjecture generator, and with so
9timtyler
That sounds pretty bizarre. So much for the idea of progress via better and better compression and modeling. However, it seems pretty unlikely to me that you actually know what you are talking about here.
0curi
Insulting my expertise is not an argument. (And given you know nothing about my expertise, it's silly too. Concluding that people aren't experts because you disagree with them is biased and closed minded.) Are you familiar with the topic? Do you want me to give you a lecture on it? Will you read about it?
5timtyler
Conventionally, and confusingly, the word reductionism has two meanings:
1cousin_it
I didn't say it was false, just irrelevant to the current discussion of what we want from a theory of knowledge. You could use math instead of code. To take a Bayesian example, the Solomonoff prior is uncomputable, but well-defined mathematically and you can write computable approximations to it, so it counts as progress in my book. To take a non-Bayesian example, fuzzy logic is formalized enough to be useful in applications. Anyway, I think I understand where you're coming from, and maybe it's unfair to demand new LW-style insights from you. But hopefully you also understand why we like Bayesianism, and that we don't even think of it at the level you're discussing.
0curi
I understand some. But I think you're mistaken, and I don't see a lot to like when judged by the standards of good philosophy.

Philosophy is important. Your projects, like inventing an AI, will run into obstacles you did not foresee if your philosophy is mistaken. Of course I have the same criticism about people in all sorts of other fields. Architects or physicists or economists who don't know philosophy run into problems too. But claiming to have an epistemology, and claiming to replace Popper, those are things most fields don't do. So I try to ask about it. Shrug.

I think I figured out the main idea of Bayesian epistemology. It is: Bayes' theorem is the source of justification (this is intended as the solution to the problem of justification, which is a bad problem). But when you start doing math, it's ignored, and you get stuff right (at least given the premises, which are often not realistic, following the proud tradition of game theory and economics). So I should clarify: that's the main philosophical claim. It's not very interesting. Oh well.
3[anonymous]
No. See here, where Eliezer specifically says that this is not the case. ("But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.")
0curi
I had already seen that. Note that I said justification not induction. I don't want to argue about this. If you like the idea, enjoy it. If you don't, just forget about it and reply to something else I said.
4[anonymous]
This is mostly irrelevant to your main point, but I'm going to talk about it because it bothered me. I don't think anyone on LessWrong would agree with this paragraph, since it assumes a whole bunch of things about AI that we have good reasons not to assume. The rationality of an AI will depend on its mind design -- whether it has biases built into its hardware or not is up to us. In other words, you can't assert that AIs will make their own mistakes, because this assumes things about the mind design of the AI, things that we can't assume because we haven't built it yet. Also, even if an AI does have its own cognitive biases, it still might be orders of magnitude more rational than a human being.
2curi
I'm not assuming stuff by accident. There is serious theory for this. AI people ought to learn these ideas and engage with them, IMO, since they contradict some of your ideas. If we're right, then you need to make some changes to how you approach AI design. So for example: If an AI is a universal knowledge creator, in what sense can it have a built in bias?
3timtyler
Astrology also conflicts with "our ideas". That is not in itself a compelling reason to brush up on our astrology.
0[anonymous]
I don't understand this sentence. Let me make my view of things clearer: An AI's mind can be described by a point in mind design space. Certain minds (most of them, I imagine) have cognitive biases built into their hardware. That is, they function in suboptimal ways because of the algorithms and heuristics they use. For example: human beings. That said, what is a "universal knowledge creator?" Or, to frame the question in the terms I just gave, what is its mind design?
2curi
That's not what mind design space looks like. It looks something like this: you have a bunch of stuff that isn't a mind at all -- it's simple, and it's not there yet. Then you have a bunch of stuff that is a fully complete mind, capable of anything that any mind can do. There are also some special cases (you could have a very long program that hard codes how to deal with every possible input, situation or idea). AIs we create won't be special cases of that type, which is a bad kind of design. This is similar to the computer design space, which has no half-computers.

A knowledge creator can create knowledge in some repertoire/set. A universal knowledge creator can do any knowledge creation that any other knowledge creator can do. There is nothing in the repertoire of any other knowledge creator that is not also in its own. Human beings are universal knowledge creators.

Are you familiar with universality of computers? And how very simple computers can be universal? There's a lot of parallel issues.
1[anonymous]
I'm somewhat skeptical of this claim -- if I design a mind that has the functions 0(n) (zero function), S(n) (successor function), and P(x0, x1, ..., xn) (projection function) but not primitive recursion, it can compute most but not all functions. So I'm skeptical of this "all or little" description of mind space and computer space. However, I suspect it ultimately doesn't matter because your claims don't directly contradict my original point. If your categorization is correct and human beings are indeed universal knowledge creators, that doesn't preclude the possibility of us having cognitive biases (which it had better not do!). Nor does it contradict the larger point, which is that cognitive biases come from cognitive architecture, i.e. where one is located in mind design space. If you're referring to Turing-completeness, then yes, I am familiar with it.
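As an aside, those three basic functions are easy to write out; this is just the standard textbook presentation sketched in Python, not anything specific to either side of this thread:

```python
def zero(n):
    """Zero function: 0(n) = 0."""
    return 0

def succ(n):
    """Successor function: S(n) = n + 1."""
    return n + 1

def proj(i, *xs):
    """Projection function: P_i(x0, ..., xk) = xi."""
    return xs[i]

# With only these and composition -- no primitive recursion, no minimization --
# you can build constants and pick out arguments, but not, for example, addition.
```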
4curi
How is that a mind? Maybe we are defining it differently. A mind is something that can create knowledge. And a lot, not just a few special cases. Like people who can think about all kinds of topics such as engineering or art. When you give a few simple functions and don't even have recursion, I don't think it meets my conception of a mind, and I'm not sure what good it is. In what sense can a bias be very important (in the long term), if we are universal? We can change it. We can learn better. So the implementation details aren't such a big deal to the result, you get the same kind of thing regardless. Temporary mistakes in starting points should be expected. Thinking needs to be mistake tolerant.
0JoshuaZ
Or orders of magnitude less rational. This isn't terribly germane to your original point but it seemed worth pointing out. We really have no good idea what the minimum amount of rationality actually is for an intelligent entity.
0[anonymous]
Oh, I definitely agree with that. It's certainly possible to conceive of a really, really, really suboptimal mind that is still "intelligent" in the sense that it can attempt to solve problems.

I'm not all that sure that this is going anywhere helpful, but since curi has asked for objections to Critical Rationalism, I might as well make mine known.

My first objection is that it attempts to "solve" the problem of induction by doing away with an axiom. Yes, if you try to prove induction, you get circularity or infinite regress. That's what happens when you attempt to prove axioms. The Problem of Induction essentially amounts to noticing that we have axioms on which inductive reasoning rests.

Popperian reasoning could be similarly "ref... (read more)

2curi
Proposed axioms can be mistakes. Do we need that axiom? Popper says the argument that we need it is mistaken. That could be an important and valid insight if true, right?

You are applying foundationalist and justificationist criticism to a philosophy which has, as one of its big ideas, that those ways of thinking are mistaken. That is not a good answer to Popper's system. No, it's worse than that. Try specifying the axiom. How are ideas justified? If the answer you give is "they are justified by other ideas, which themselves must be justified", then that isn't merely not proven but wrong. That doesn't work. If the answer you give is "they are justified by other ideas which are themselves justified, or by the following set of foundational ideas ...", then the problem is not merely that you can't prove your foundations are correct, but that this method of thinking is anti-critical and not sufficiently fallibilist.

Fallibilism teaches that people make mistakes. A lot. It is thus a bad idea to try to find the perfect truth and set it up to rule your thinking, as the unquestionable foundations. You will set up mistaken ideas in the role of foundations, and that will be bad. What we should do instead is try to set up institutions which are good at (for example):

1) making errors easy to find and question -- highlighting rather than hiding them

2) making ideas easy to change when we find errors

Popper's solution to the problem of justification isn't merely dropping an axiom but involves advocating an approach along these lines (with far more detail than I can type in). It's substantive. And rejecting it on the grounds that "you can't prove it" or "I refuse to accept it" is silly. You should give a proper argument, or reconsider rejecting it.

It is non-obvious how it doesn't. There is a legitimate issue here. But the problem can be and is dealt with. I do not have this problem in practice, and I know of no argument that I really have it in theory. You give Wittgenstein
3Desrtopa
Edit: I wrote a response up, but I deleted it because I think this is getting too confrontational to be useful. I have plenty of standing objections to Critical Rationalism, but I don't think I can pose them without creating an attitude too adversarial to be conducive to changing either of our minds. I hate to be the one to start bringing this in again, but I think perhaps if you want to continue this discussion, you should read the Sequences, at which point you should hopefully have some understanding of how we are convinced that Bayesianism improves people's information processing and decisionmaking from a practical standpoint. I will be similarly open to any explanations of how you feel Critical Rationalism improves these things (let me be very clear that I'm not asking for more examples of people who approved of Popper or things you feel Critical Rationalism can take credit for, show me how people who can only be narrowly interpreted as Popperian outperform people who are not Popperian.) I have standing objections to Popper's critiques of induction, but this is what I actually care about and am amenable to changing my mind on the basis of.
-2curi
The reason I'm not very interested in carefully reading your Sequences is that I feel they miss the point and aren't useful (that is, useful to philosophy. lots of your math is nice). In my discussions here, I have not found any reason to think otherwise. Show it how? I can conjecture it. Got a criticism?
1Desrtopa
Responses to criticisms are not interesting to me; proponents of any philosophy can respond to criticisms in ways that they are convinced are satisfying, and I'm not impressed that supporters of Critical Rationalism are doing a better job. If you cannot yourself come up with a convincing way to demonstrate that Critical Rationalism results in improved success in ways that supporters of other philosophies cannot, why should I take it seriously?
2curi
What would you find convincing? What convinced you of Bayesianism, or whatever it is you believe?
2Desrtopa
Examples of mistakes in processing evidence people make in real life which lead to bad results, and how Bayesian reasoning resolves them, followed by concrete applications such as the review of the Amanda Knox trial. Have you already looked at the review of the Amanda Knox trial? If you haven't, it might be a useful point for us to examine. It doesn't help anyone to point out an example of inductive reasoning, say "this is a mistake" because you reject the foundations of inductive reasoning, but not demonstrate how rejecting it leads to better results than accepting it. So far the examples you have given of the supposed benefits of Critical Rationalism have been achievements of people who can only be loosely associated with Critical Rationalism, or arguments in a frame of Critical Rationalism for things that have already been argued for outside a frame of Critical Rationalism.
0Peterdjones
Which exposes one of the problems with Popperianism: it leads to the burden of proof being shifted to the wrong place. The burden should be with whoever proposes a claim, or whoever makes the most extraordinary claim. Popperianism turns it into a burden of disproof on the refuter. All you have to do is "get in first" with your conjecture, and you can sit back and relax. "Why, you have to show Barack Obama is NOT an alien".
-2curi
In Popperism, there is no burden of proof. How did you find me here, Peter?
0JoshuaZ
I'm replying a second time to this remark because, thinking about it more, it illustrates a major problem you are having. You are using a specific set of epistemological tools and notation when that is one of the things that is in fact in dispute. That's unproductive and is going to get people annoyed. It is all the more severe because many of these situations are cases where the specific epistemology doesn't even matter. For example, the claim under discussion is the claim that "people who can only be narrowly interpreted as Popperian outperform people who are not Popperian". That's something that could be tested regardless of epistemology. To use a similar example, if someone is arguing for Christianity and they claim that Christians have a longer lifespan on average, then I don't need to go into a detailed discussion about epistemology to examine the data. If I read a paper, in whatever branch of science, I could be a Popperian, a Bayesian, completely uncommitted, or something else, and still read the paper and come to essentially the same results. Trying to discuss claims using exactly the framework in question is at best unproductive, and is in general unhelpful.
0curi
Sure I am. But so are you! We can't help but use epistemological tools. I use the ones I regard as actually working. As do you. I'm not sure what you're suggesting I do instead. If you want me to recognize what I'm doing, I do. If you want me to consider other toolsets, I have. In depth.

(Note, btw, that I am here voluntarily choosing to learn more about how Bayesians think. I like to visit various communities. Simultaneously I'm talking to Objectivists (and reading their books) who have a different way of approaching epistemology.)

The primary reason some people have accused me (and Brian) of not understanding Bayesian views and other views not our own isn't our unfamiliarity but because we disagree and choose to think differently than them and not to accept or go along with various ideas. When people do stuff like link me to http://yudkowsky.net/rational/bayes it is their mistake to think I haven't read it. I have. They think if I read it I would change my mind; they are just plain empirically wrong; I did read it and did not change my mind. They ought to learn something from their mistake, such as that their literature is less convincing than they think it is.

On the other hand, no one here is noticeably familiar with Popper. And no one has pointed me to any rigorous criticism of Popper by any Bayesian. Nor any rigorous rebuttal to Popper's criticisms of Bayesianism (especially the most important ones, that is the philosophical not mathematical ones). The situation is that Popper read up on Bayesian stuff, and many other ideas, engaged with and criticized other ideas, formulated his own ideas, rebutted criticisms of his views, and so on. That is all good stuff. Bayesians do it some too. But they haven't done it with Popper; they've chosen to disregard him based on things like his reputation, and misleading summaries of his work. At best some people here have read his first book, which is not the right one to start with if you want to understand what he's about
0hairyfigment
Tell us what you think your tool does better, in some area where we see a problem with Bayes. (And I do mean Bayes' Theorem as a whole, not a part of it taken out of context.) Seems like any process that leads to harmful priors can also produce a criticism of your position as you've explained it so far. As I mentioned before, the Consistent Gambler's Fallacy would lead us to criticize any theory that has worked in the past.
0JoshuaZ
Yes. There's no reason to conjecture this other than your own personal preference. I could conjecture that people with red hair perform better and that would have almost as much basis. (The mere act of asserting a hypothesis is not a reason to take it seriously.)
[anonymous]

You can criticize any idea you want. There's no rules again. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, or it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too. Step 3) All criticized ideas are rejected. They're flawed. They're not good enough.

tl;dr

I've just Criticised you... (read more)

2curi
Your criticism is generic. It would work on all ideas equally well. It thus fails to differentiate between ideas or highlight a flaw in the sense of something which could possibly be improved on. So, now that I've criticized your criticism (and the entire category of criticisms like it), we can reject it and move on.
3[anonymous]
My criticism is not generic. It would not work on an idea which consisted of cute cat pictures. Therefore, your criticism of my criticism does not apply. I can continue providing specious counter-counter-...counter-criticisms until the cows come home. I don't see how your scheme lets sensible ideas get in edgeways against that sort of thing. Anyhow, criticism of criticisms wasn't in your original method.
2curi
If you understand they are specious, then you have a criticism of it. Criticisms are themselves ideas/conjectures and should themselves be criticized. And I'm not saying this ad hoc, I had this idea before posting here.
0[anonymous]
I understand they are specious, but I'm not using your epistemology to determine that. What basis do you have for saying that they are specious?
1curi
It doesn't engage with the substance of my idea. It does not explain what it regards as a flaw in the idea. Unless you meant the tl;dr as your generic criticism and the flaw you are trying to explain is that all good ideas should be short and simple. Do you want me to criticize that? :-)
3[anonymous]
What I'm trying to get at is: By your system, the idea to be accepted is the one without an uncountered criticism. What matters isn't any external standard of whether the criticism is good or bad, just whether it has been countered. But any criticism, good or bad, can be countered by a (probably bad) criticism, so your system doesn't offer a way to distinguish between good criticism and bad criticism.
2curi
You have to conjecture standards of criticism (or start with cultural ones). Then improve them by criticism, and perhaps by conjecturing new standards. If you want to discuss some specific idea, say gardening, you can't discuss only gardening in a very isolated way. You'll need to at least implicitly refer to a lot of background knowledge, including standards of criticism. One way this differs from foundations is that if you think a standard of criticism reaches the wrong conclusion about gardening, you can argue from your knowledge of gardening backwards (as some would see it) to criticize the standard of criticism for getting a wrong answer.
2[anonymous]
How can you expect that criticizing your standards of criticism will be productive if you don't have a good standard of criticism in the first place?
1curi
Many starting points work fine. In theory, could you get stuck? I don't have a proof either way. I don't mind too much. Humans already have standards of criticism which don't get stuck. We have made scientific progress. The standards we already have allow self-modification and thereby unbounded progress. So it doesn't matter what would have happened if we had started with a bad standard once upon a time; we're past that (it does matter if we want to create an AI).
2[anonymous]
You would definitely get stuck. The problem Khoth pointed out is that your method can't distinguish between good criticism and bad criticism. Thus, you could criticize any standard that you come up with, but you'd have no way of knowing which criticisms are legitimate, so you wouldn't know which standards are better than others. I agree that in practice we don't get stuck, but that's because we don't use the method or the assumptions you are defending.
1curi
I meant stuck in the sense of couldn't get out of. Not in the sense of could optionally remain stuck. What's the argument for that? We have knowledge about standards of criticism. We use it. Objections about starting points aren't very relevant because Popperians never said they were justified by their starting points. What's wrong with this?
2[anonymous]
I don't think there's a way out if your method doesn't eventually bottom out somewhere. If you don't have a reliable or objective way of distinguishing good criticism from bad, the act of criticism can't help you in any way, including trying to fix this standard. If you don't have objective knowledge of standards of criticism and you are unwilling to take one as an axiom, then what are you justified by?
-1curi
Nothing. Justification is a mistake. The request that theories be justified is a mistake. They can't be. They don't need to be. Using the best ideas we know of so far is a partially reliable, partially objective way which allows for progress.
0prase
Doesn't this create an infinite regress of criticisms, if you try hard enough? (Your countercriticism is also generic, when it applies to the whole category.)
2curi
If you try hard enough you can refuse to think at all. Popperian epistemology helps people who want to learn. It doesn't provide a set of rules such that, if you follow them exactly while trying your best not to make progress, you will learn anyway. We only learn much when we seriously try to, with good intentions.

You can always create trivial regresses, e.g. by asking "why?" infinitely many times. But that's different than the following regress: if you assert "theories should be justified, or they are crap" and you assert "theories are justified in one way: when they are supported by a theory which is itself justified", then you have a serious problem to deal with which is not the same type as asking "why?" forever.

"Reject anything which rejects entire categories" is not a precise way to state which theories should be rejected. You are correct that the version I wrote can be improved to be clearer and more precise. One of the issues is whether a criticism engages with the substance of the idea it is criticizing, or not. "All ideas are wrong" (for example) doesn't engage with any of the explanations that the ideas it rejects give, it doesn't point out flaws in them, it doesn't help us learn. Criticisms which don't help us learn better are no good -- the whole purpose and meaning of criticism, as we conceive it, is that you explain a flaw so we can learn better.

One issue this brings up is that communication is never 100% precise. There is always ambiguity. If a person wants to, he can interpret everything you say in the worst possible way. If he does so, he will sabotage your discussion. But if he follows Popper's (not unique or original) advice to try to interpret ideas he hears as the best version they could mean -- to try to figure out good ideas -- then the conversation can work better.
FAWS

You assign T0 a probability, say 99.999%. Never mind how or why; the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, we have to wonder whether that 99.999% figure is correct.

Subjective probabilities don't work like that. Your subjective probability just is what it is. In Bayesian terms the closest thing to a "real" probability is whatever probability estimation is the best you can do with the available data. There is no "correct" or "incorrect" subjective probability, just predictably doing worse than possible to different degrees.

2Matt_Simpson
There is a correct P(T0|X) where X is your entire state of information. Probabilities aren't strictly speaking subjective, they're subjectively objective.
1FAWS
"Subjectively objective" just means that trying to do the best you can doesn't leave any room for choice. You can argue that you aren't really talking about probabilities if you knowingly do worse than you could, but that's just a matter of semantics.
1[anonymous]
Are you saying that there is no regress problem? Yudkowsky disagrees. And so do other commenters here, one of whom called it a "necessary flaw".
4FAWS
No, just that it doesn't manifest itself in the form of a pyramid of probabilities of probabilities being "correct". There certainly is the problem of priors, and the justification for reasoning that way in the first place (which were sketched by others in the other thread).
0Manfred
Yeah, you're making a flawed argument by analogy. "There's an infinite regress in deductive logic, so therefore any attempt at justification using probability will also lead to an infinite regress." The reason that probabilistic justification doesn't run into this (or at least, not the exact analogous thing) is that "being wrong" is a definite state with known properties, that is taken into account when you make your estimate. This is very unlike deductive logic.
0timtyler
That essay seems pretty yuck to me. Agent beliefs don't normally regress to before they were conceived. They get assigned some priors around when they are born - usually by an evolutionary process.
1[anonymous]
I'm not clear on what you are saying.

I fail to see how, in practical terms, this is at all better than using induction based reasoning. It may make you feel and look smarter to tell someone they can't prove or disprove anything with certainty, but that's not exactly a stunning endorsement. You can't actually ACT as if nothing is ever conclusively true. I would like to see a short description as to WHY this is a better way to view the world and update your beliefs about it.

1curi
But I do act that way. I am a fallibilist. Are you denying fallibilism? Some people here endorsed it. Is there a Bayesian consensus on it? Why do you think I don't act like that?

Because it works and makes sense. If you want applications to real life fields you can find applications to parenting, relationships and capitalism here: http://fallibleideas.com/
0SarahSrinivasan
I was interested in applications to capitalism. Is there a place on that site other than the one titled "Capitalism" which shows applications to capitalism? I saw nothing there involving fallibilism or acting as if nothing is ever conclusively true.

Popperian epistemology still relies on deductive logic. Why is deductive logic trustworthy? (Serious question, I think it illuminates the nature of foundations)

You might argue that we conjecture that deductive logic, as we know it, is true/valid/correct and nothing that we've come up with seems to refute it - yet that doesn't mean we've "proved" that deductive logic is correct. I would go further with some positive arguments, but we'll leave it at that for now.

A Bayesian might argue that the basic assumptions that go into Bayesian epistemology (T... (read more)

2curi
Popperian epistemology uses deductive logic, especially in science, but doesn't rely on it in any fundamental way for the main philosophical ideas.

It is not "trustworthy". But I don't have a criticism of it. I don't reject ideas for no reason. Lack of positive justification isn't a reason (since nothing has justification, lack of it cannot differentiate between theories, so it isn't a criticism to lack it).

What is "trustworthy" in an everyday sense, but not a strict philosophical sense, is knowledge. The higher quality an idea (i.e. the more it's been improved up to now), the more trustworthy, since the improvements consist of getting rid of mistakes, so there are fewer mistakes to bite you. But how many mistakes are left, and how big are they? Unknown. So it's not trustworthy in any kind of definitive way.

Yes. We do conjecture it. And it's not proved. So what? A difference between Popperian views and justificationist ones is Popperian views don't say anything should be justified and then fail to do it. But if you do say things should be justified, and then fail to do it, that is bad.

When Bayesians or inductivists set out to justify theories (and we mean that word broadly, e.g. "support" is a type of justification; so is "having high probability" when it is applied to ideas rather than to events), they are proposing something rather different from logic. A difference is: justificationism has been criticized while logic hasn't. The criticism is: if you say that theories should be justified, and you say that they gain their justification from other ideas which are themselves justified, then you get a regress. And if you don't say that, you face the question: how are ideas to be justified? And you better have an answer. But no known answers work.

So justification has a different kind of status than logic. And also, if you accept justificationism then you face the problem of justifying logic. But if you don't, then you don't. So that's why you have to, but we don't. So you might wonder what a non-justiifi
3prase
I second here Khoth's comment. How do you decide about the validity of a criticism? There are certainly people who don't understand logic, and since you have said that not understanding something counts as a criticism, doesn't it mean that you actually have a criticism of logic? Or does it only count that you personally don't criticise it? If so, how is this approach different from accepting any idea you wish? What's the point of having an epistemology when it actually doesn't constrain your beliefs in any way?

A technical question: how do I make nested quotes?
2curi
You conjecture standards of criticism, and use them. If you think they aren't working well, you can criticize them within the system and change them, or you can conjecture new standards of criticism and use those. Note: this has already been done, and we already have standards of criticism which work pretty well and which allow themselves to be improved. (They are largely not uniquely Popperian, but well known.)

Different aspect: in general, all criticisms always have some valid point. If someone is making a criticism, and it's wrong, then why wasn't he helped enough not to do that? Theories should be clear and help people understand the world. If someone doesn't get it then there is room for improvement.

I don't regard logic as 'rules', in this context. But terminology is not important. The way logic figures into Popperian critical discussions is: if an idea violates logic you can criticize it for having done so. It would then in theory be possible to defend it by saying why this idea is out of the domain of logic or something (and of course you can point out if it doesn't actually violate logic) -- there's no rule against that. But no one has ever come up with a good argument of that type.
2prase
Isn't this contradicting this? I mean, if you can judge arguments and say whether they are good, doesn't it mean that there are bad arguments which don't have a valid point?
2curi
All criticisms have some kind of point, e.g. they might highlight a need for something to be explained better. This is compatible with saying no one ever came up with a good argument (good in the context of modern knowledge) for the Earth being flat, or something. If someone thinks the Earth is flat, then this is quite a good criticism of something -- and I suspect that something is his own background knowledge. We could discuss the matter. If he had some argument which addresses my round-earth views, I'd be interested. Or he might not know what they are. Shrug.
0Matt_Simpson
0prase
This works for me. However, I want to quote something inside a quote and then continue on the first level, such as The text in italic should be one quoting level deeper.
2jimrandomh
> > Inner quote
>
> Outer quote

Yields the "Inner quote" line nested one level deeper inside the "Outer quote".
0prase
Thanks!
2[anonymous]
No, that's incorrect. That may be how other philosophers use the term, but that's not what it means here. Edit: To clarify, I mean that LessWrong doesn't define reductionism the same way you just did, so your argument doesn't apply.
0JoshuaZ
Um, there's a lot of criticism out there of deductive logic. For one thing, humans often make mistakes in deductive logic so one doesn't know if something is correct. For another, some philosophers have rejected the law of the excluded middle. Yet others have proposed logical systems which try to localize contradictions and prevent explosions (under the sensible argument that when a person is presented with two contradictory logical arguments that look valid to them they don't immediately decide that the moon is made of green cheese). There's a lot to criticize about deductive logic.
0timtyler
I don't think I have heard that argued. The problem of the reference machine in Occam's razor leads to a million slightly-different variations. That seems much more dubious than deduction does.
[anonymous]

Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more

... (read more)
2curi
If something can explain everything (by not being adapted to addressing any particular problem) we can criticize it for doing just that. So we dispense with it.
2[anonymous]
In that case, you seem to be saying "dispense with a hypothesis if it can't explain everything, and also dispense with it if it does explain everything." How can both of these be legitimate reasons for dismissal?
2curi
If it doesn't explain everything (relevant to some problem you are trying to address), improve it. If it explains everything vacuously, reject it.
[anonymous]

If you don't justify your beliefs, how are they less arbitrary than those of a Bayesian? You may say they are tied to the truth (in a non-infinite-regress-laden way) by the truth-seeking process of criticism, forming new ideas, etc. However this is also what ties a Bayesian to the truth. The Bayesian is restricted (in theory) to updating probabilities based on evidence, but we tend to accept absence/presence/content of criticisms as evidence (though we insist on talking about the truth or falsity of statements, rather than whether they're "good ideas"... (read more)

It seems you have talked entirely about internalist versions of epistemology. What about reliabilism? It does not fall into either of your categories (it's one I'm pretty sympathetic towards).

Also, to make sure I understand you correctly: is this arguing for getting rid of 'justified' in the standard 'justified true belief' (also including the Gettier part too), am I right? Or are you saying something is "justified" when it can no longer be criticized (due to not being able to come up with a criticism)? I also agree with Yvain that this "criticize the idea" step needs to be taken apart more.

I might have more comments but need to think about it more

Many of the criticisms mentioned in the above comments have in fact been addressed by Bartley in his conception of pan-critical rationalism. See his book "The Retreat to Commitment".

Bayesian methods can be considered useful within such an epistemological system, however one cannot justify that one fact is more true than another merely based on Bayesian probabilities.

Both justificationist and falsificationist outlooks are stated with respect to something else. That is why philosophers played all those language games. They soon realised that you c... (read more)