I am very inexperienced in epistemology, so forgive me if I'm making a simple error.
But it sounds like everything important in your theory is stuck into a black box in the words "criticize the idea".
Suppose we had a computer program designed to print the words "I like this idea" to any idea represented as a string with exactly 5 instances of the letter 'e' in it, and the words "I dislike this idea because it has the wrong number of 'e's in it" to any other idea.
And suppose we had a second computer program designed to print "I like this idea" to any idea printed on blue paper, and "I dislike this idea because it is on the wrong color paper" to any idea printed on any other color of paper.
These two computers could run through your decision making process of generating and criticizing ideas, and eventually would settle on the first idea generated which was written on blue paper and which used the letter 'e' exactly five times.
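For concreteness, here is a minimal sketch of those two "critics" (Python; the function names and sample inputs are mine, purely illustrative):

```python
def critic_letter_e(idea: str) -> str:
    # Approves any idea whose text contains exactly five 'e's.
    if idea.count('e') == 5:
        return "I like this idea"
    return "I dislike this idea because it has the wrong number of 'e's in it"

def critic_paper_color(paper_color: str) -> str:
    # Approves any idea printed on blue paper, regardless of its content.
    if paper_color == "blue":
        return "I like this idea"
    return "I dislike this idea because it is on the wrong color paper"

# Both "critics" approve the first idea that happens to be on blue paper
# and to use 'e' exactly five times, no matter what it says:
print(critic_letter_e("seven geese"))   # five 'e's, so this critic approves
print(critic_paper_color("blue"))       # blue paper, so this critic approves
```

Neither program is tracking anything we would call truth, which is the point of the example.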
So it would seem that for this process to capture what we mean by "truth", you have to start out with some reasoners who already have a pretty good set of internal reasoning processes kind of like our ow...
Regarding the technical side of your post, if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X. (If you don't trust your own transistors, add error-correction at the level of transistors, don't contaminate the software.) But it's hard to think of a situation where the program will need to make use of the latter probability.
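A toy sketch of that point (Python; the variable names and structure are my own, not any particular Bayesian library's):

```python
# The program's credence in X, and its credence about its own credence.
beliefs = {"X": 0.87}

p_x = beliefs["X"]  # P(X) = 0.87
# The program can read its own state exactly, so (leaving hardware faults to
# transistor-level error correction) it can assign probability 1 to the
# proposition "I assign probability 0.87 to X":
p_self_report = 1.0 if beliefs["X"] == 0.87 else 0.0
print(p_x, p_self_report)  # 0.87 1.0
```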
Regarding the substance, I think you disagree with popular opinion on LW because there are two possible meanings of "epistemology":
1) If a human wants to have rational beliefs, what rulebook should they follow?
2) If we want to write a computer program that arrives at rational beliefs, what algorithms should we use?
From your posts and comments it looks like you're promoting Popperianism as an answer to (1). The problem is, it's pretty hard to determine whether a given answer to (1) is right, wrong or meaningless, when it's composed of mere words (cognitive black boxes) and doesn't automatically translate to an answer for (2). So most LWers think that (2) is really the right question to ask, and any non-confused answer to (2)...
I'm willing to reformulate like this:
1) How can a human sort out good ideas from bad ideas?
2) How can a computer program sort out good ideas from bad ideas?
and the subsequent paragraph can stay unchanged. Whatever recipe you're proposing to improve human understanding, it ought to be "reductionist" and apply to programs too, otherwise it doesn't meet the LW standard. Whether AIs can be more rational than people is beside the point.
I'm not all that sure that this is going anywhere helpful, but since curi has asked for objections to Critical Rationalism, I might as well make mine known.
My first objection is that it attempts to "solve" the problem of induction by doing away with an axiom. Yes, if you try to prove induction, you get circularity or infinite regress. That's what happens when you attempt to prove axioms. The Problem of Induction essentially amounts to noticing that we have axioms on which inductive reasoning rests.
Popperian reasoning could be similarly "ref...
You can criticize any idea you want. Again, there are no rules. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, or it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too. Step 3) All criticized ideas are rejected. They're flawed. They're not good enough.
tl;dr
I've just Criticised you...
You assign T0 a probability, say 99.999%. Never mind how or why, the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we have to wonder is whether that 99.999% figure is correct.
Subjective probabilities don't work like that. Your subjective probability just is what it is. In Bayesian terms the closest thing to a "real" probability is whatever probability estimation is the best you can do with the available data. There is no "correct" or "incorrect" subjective probability, just predictably doing worse than possible to different degrees.
I fail to see how, in practical terms, this is at all better than using induction-based reasoning. It may make you feel and look smarter to tell someone they can't prove or disprove anything with certainty, but that's not exactly a stunning endorsement. You can't actually ACT as if nothing is ever conclusively true. I would like to see a short description of WHY this is a better way to view the world and update your beliefs about it.
Popperian epistemology still relies on deductive logic. Why is deductive logic trustworthy? (Serious question, I think it illuminates the nature of foundations)
You might argue that we conjecture that deductive logic, as we know it, is true/valid/correct and nothing that we've come up with seems to refute it - yet that doesn't mean we've "proved" that deductive logic is correct. I would go further with some positive arguments, but we'll leave it at that for now.
A Bayesian might argue that the basic assumptions that go into Bayesian epistemology (T...
...) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more
If you don't justify your beliefs, how are they less arbitrary than those of a Bayesian? You may say they are tied to the truth (in a non-infinite-regress-laden way) by the truth-seeking process of criticism, forming new ideas, etc. However, this is also what ties a Bayesian to the truth. The Bayesian is restricted (in theory) to updating probabilities based on evidence, but we tend to accept absence/presence/content of criticisms as evidence (though we insist on talking about the truth or falsity of statements, rather than whether they're "good ideas"...
It seems you have only talked about internalist versions of epistemology. What about Reliabilism? It does not fall into either of your categories (it's one I'm pretty sympathetic towards).
Also, to make sure I understand you correctly: is this arguing for getting rid of "justified" in the standard justified true belief account (also including the Gettier part too), am I right? Or are you saying something is "justified" when it can no longer be criticized (due to not being able to come up with a criticism)? I also agree with Yvain that this "criticize the idea" step needs to be taken apart more.
I might have more comments but need to think about it more
Many of the criticisms mentioned in the above comments have in fact been addressed by Bartley in his conception of pan-critical rationalism. See his book "The Retreat to Commitment".
Bayesian methods can be considered useful within such an epistemological system, however one cannot justify that one fact is more true than another merely based on Bayesian probabilities.
Both justificationist and falsificationist outlooks are stated with respect to something else. That is why philosophers played all those language games. They soon realised that you c...
Branching from: http://lesswrong.com/lw/54u/bayesian_epistemology_vs_popper/3uta?context=4
The question is: how do you make decisions without justifying decisions, and without foundations?
If you can do that, I claim the regress problem is solved. Whereas induction, for example, is refuted by the regress problem (no, arbitrary foundations or circular arguments are not solutions).
OK stepping back a bit, and explaining less briefly:
Infinite regresses are nasty problems for epistemologies.
All justificationist epistemologies have an infinite regress.
That means they are false. They don't work. End of story.
There are options, of course. Don't want a regress? No problem. Have an arbitrary foundation. Have an unjustified proposition. Have a circular argument. Or have something else even sillier.
The regress goes like this, and the details of the justification don't matter.
If you want to justify a theory, T0, you have to justify it with another theory, T1. Then T1 needs justifying by T2. Which needs justifying by T3. Forever. And if T25 turns out to be wrong, then T24 loses its justification. And with T24 unjustified, T23 loses its justification. And it cascades all the way back to the start.
I'll give one more example. Consider probabilistic justification. You assign T0 a probability, say 99.999%. Never mind how or why, the probability people aren't big on explanations like that. Just do your best. It doesn't matter. Moving on, what we have to wonder is whether that 99.999% figure is correct. If it's not correct then it could be anything, such as 90% or 1% or whatever. So it had better be correct. So we had better justify it. How? Simple. We'll use our whim to assign it a probability of 99.99999%. OK! Now we're getting somewhere. I put a lot of 9s so we're almost certain to be correct! Except, what if I had that figure wrong? If it's wrong it could be anything, such as 2% or 0.0001%. Uh oh. I had better justify my second probability estimate. How? Well, we're trying to defend this probabilistic justification method. Let's not give up yet and do something totally different; instead we'll give it another probability. How about 80%? OK! Next I ask: is that 80% figure correct? If it's not correct, the probability could be anything, such as 5%. So we had better justify it. So it goes on and on forever. Now there are two problems. First, it goes on forever and you can't ever stop: you've got an infinite regress. Second, suppose you stopped after some very large but finite number of steps. Then the probability that the first theory is correct is arbitrarily small. Remember that at each step we didn't even have a guarantee, only a high probability. And if you roll the dice a lot of times, even with very good odds, eventually you lose. And you only have to lose once for the whole thing to fail.
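To make that last bit of arithmetic concrete, here is a small sketch (Python; the numbers are chosen only for illustration) of how confidence in the whole chain shrinks when each justification step is itself only probably right:

```python
# Each level of justification is only probably correct. The chance that the
# whole chain holds is the product over all levels, which heads toward zero
# as the chain gets longer, even with very good odds at each step.
step_confidence = 0.99999  # how sure we are that any one level's figure is right

chain_holds = 1.0
for depth in range(1, 100_001):
    chain_holds *= step_confidence
    if depth in (10, 1_000, 100_000):
        print(f"after {depth} steps: {chain_holds:.4f}")
# after 10 steps: 0.9999; after 1000 steps: 0.9900; after 100000 steps: 0.3679
```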
OK so regresses are a nasty problem. They totally ruin all justificationist epistemologies. That's basically every epistemology anyone cares about except skepticism and Popperian epistemology. And forget about skepticism, that's more of an anti-epistemology than an epistemology: skepticism consists of giving up on knowledge.
Now we'll take a look at Popper and Deutsch's solution. In my words, with minor improvements.
Regresses all go away if we drop justification. Don't justify anything, ever. Simple.
But justification had a purpose.
The purpose of justification is to sort out good ideas from bad ideas. How do we know which ideas are any good? Which should we believe are true? Which should we act on?
BTW that's the same general problem that induction was trying to address. And induction is false. So that's another reason we need a solution to this issue.
The method of addressing this issue has several steps, so try to follow along.
Step 1) You can suggest any ideas you want. There are no rules: suggest anything you have the slightest suspicion might be useful. The source of the ideas, and the method of coming up with them, don't matter at all. This part is easy.
Step 2) You can criticize any idea you want. Again, there are no rules. If you don't understand it, that's a criticism -- it should have been easier to understand. If you find it confusing, that's a criticism -- it should have been clearer. If you think you see something wrong with it, that's a criticism -- it shouldn't have been wrong in that way, *or* it should have included an explanation so you wouldn't make a mistaken criticism. This step is easy too.
Step 3) All criticized ideas are rejected. They're flawed. They're not good enough. Let's do better. This is easy too. Only the *exact* ideas criticized are rejected. Any idea with at least one difference is deemed a new idea. It's OK to suggest new ideas which are similar to old ideas (in fact it's a good idea: when you find something wrong with an idea you should try to work out a way to change it so it won't have that flaw anymore).
Step 4) If we have exactly one idea remaining to address some problem or question, and no one wants to revisit the previous steps at this time, then we're done for now (you can always change your mind and go back to the previous steps later if you want to). Use that idea. Why? Because it's the only one. It has no rivals, no known alternatives. It stands alone as the only non-refuted idea. We have sorted out the good ideas from the bad -- as best we know how -- and come to a definite answer, so use that answer. This step is easy too!
Step 5) What if we have a different number of ideas left over which is not exactly one? We'll divide that into two cases:
Case 1) What if we have two or more ideas? This one is easy. There is a particular criticism you can use to refute all the remaining theories. It's the same every time so there's not much to remember. It goes like this: idea A ought to tell me why B and C and D are wrong. If it doesn't, it could be better! So that's a flaw. Bye bye A. On to idea B: if B is so great, why hasn't it explained to me what's wrong with A, C and D? Sorry B, you didn't answer all my questions, you're not good enough. Then we come to idea C and we complain that it should have been more help and it wasn't. And D is gone too since it didn't settle the matter either. And that's it. Each idea should have settled the matter by giving us criticisms of all its rivals. They didn't. So they lose. So whenever there is a stalemate or a tie with two or more ideas then they all fail.
Case 2) What if we have zero ideas? This is crucial because case one always turns into this! The answer comes in two main parts. The first part is: think of more ideas. I know, I know, that sounds hard. What if you get stuck? But the second part makes it easier. And you can use the second part over and over and it keeps making it easier every time. So you just use the second part until it's easy enough, then you think of more ideas when you can. And that's all there is to it.
OK so the second part is this: be less ambitious. You might worry: but what about advanced science with its cutting edge breakthroughs? Well, this part is optional. If you can wait for an answer, don't do it. If there's no hurry, then work on the other steps more. Make more guesses and think of more criticisms and thus learn more and improve your knowledge. It might not be easy, but hey, the problem we were looking at is how to sort out good ideas from bad ideas. If you want to solve hard problems then it's not easy. Sorry. But you've got a method, just keep at it.
But if you have a decision to make then you need an answer now so you can make your decision. So in that case, if you actually want to reach a state of having exactly one theory which you can use now, then the trick when you get stuck is to be less ambitious. I think you can see how that would work in general terms. Basically if human knowledge isn't good enough to give you an answer of a certain quality right now, then your choices are either to work on it more and not have an answer now, or accept a lower quality answer. You can see why there isn't really any way around that. There's no magic way to always get a top quality answer now. If you want a cure for cancer, well I can't tell you how to come up with one in the next five minutes, sorry.
This is a bit vague so far. How does lowering your standards address the problem? So what you do is propose a new idea like this, "I need to do something, so I will do..." and then you put whatever you want (idea A, idea B, some combination, whatever).
This new idea is not refuted by any of the existing criticisms. So now you have one idea, it isn't refuted, and you might be done. If you're happy with it, great. But you might not be. Maybe you see something wrong with it, or you have another proposal. That's fine; just go back to the first three steps and do them more. Then you'll get to step 4 or 5 again.
What if we get back here? What do we do the second time? The third time? We simply get less ambitious each time. The harder a time we're having, the less we should expect. And so we can start criticizing any ideas that aim too high.
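Put together, here is a minimal sketch of that loop (Python; entirely my own framing of the steps above, with the propose/criticize/lower-ambition hooks left abstract since that is where all the real content lives):

```python
def resolve(problem, propose_ideas, criticize, lower_ambition):
    """Roughly steps 1-5: propose ideas, reject every criticized one, and if
    exactly one survives, use it; otherwise get less ambitious and try again."""
    while True:
        ideas = propose_ideas(problem)                   # step 1: anything goes
        surviving = [idea for idea in ideas
                     if criticize(idea, ideas) is None]  # steps 2-3: any criticism knocks an idea out
        if len(surviving) == 1:                          # step 4: a single unrefuted idea wins
            return surviving[0]
        # step 5: a tie means every idea failed to refute its rivals (case 1),
        # and zero survivors means we need new ideas or a more modest problem (case 2)
        problem = lower_ambition(problem)
```

The assumption here is that a criticism can be modeled as something `criticize` returns (or `None` when it has nothing to say); the loop itself is only bookkeeping, and all of the judgment sits in the three hooks you pass in.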
BTW it's explained on my website here, including an example:
http://fallibleideas.com/avoiding-coercion
Read that essay, keeping in mind what I've been saying, and hopefully everything will click. Just bear in mind that when it talks about cooperation between people, and disagreements between people, and coming up with solutions for people -- when it discusses ideas in two or more separate minds -- everything applies exactly the same if the two or more conflicting ideas are all in the same mind.
What if you get really stuck? Well, why not do the first thing that pops into your head? You don't want to? Why not? Got a criticism of it? It's better than nothing, right? No? If it's not better than nothing, do nothing! You think it's silly or dumb? Well, so what? If it's the best idea you have then it doesn't matter if it's dumb. You can't magically instantly become super smart. You have to use your best idea even if you'd like to have better ideas.
Now you may be wondering whether this approach is truth-seeking. It is, but it doesn't always find the truth immediately. If you want a resolution to a question immediately then its quality cannot exceed today's knowledge (plus whatever you can learn in the time allotted). It can't do better than the best that is known how to do. But as far as long-term progress goes, the truth-seeking comes in those first three steps. You come up with ideas. You criticize those ideas. Thereby you eliminate flaws. Every time you find a mistake and point it out you are making progress towards the truth. That's how we approach the truth: not by justifying but by identifying mistakes and learning better. This is evolution, it's the solution to Paley's problem, it's discussed in BoI and on my Fallible Ideas website. And it's not too hard to understand: improve stuff, keep at it, and you get closer to the truth. Mistake correcting -- criticism -- is a truth-seeking method. That's where the truth-seeking comes from.