
Comment author: curi 02 December 2017 05:38:37AM 0 points

Is the crackpot being responsive to the issues and giving arguments – arguments are what matter, not people – or is he saying non-sequiturs and refusing to address questions? If he speaks to the issues we can settle it quickly; if not, he isn't participating and doesn't matter. If we disagree about the nature of what's taking place, it can be clarified, and I can make a judgement which is open to Paths Forward. You seem to wish to avoid the burden of this judgement by hedging with a "probably".

Fallibility isn't an amount. Correct arguments are decisive or not; confusion about this is commonly due to vagueness of problem and context (which are not matters of probability and cannot be accurately summed up that way). See https://yesornophilosophy.com

Comment author: Viliam 02 December 2017 03:18:50PM *  2 points

I wish to conclude this debate somehow, so I will provide something like a summary:

If I understand you correctly, you believe that (1) induction and probabilities are unacceptable for science or "critical rationalism", and (2) weighing evidence can be replaced by... uhm... collecting verbal arguments and following a flowchart, while drawing a tree of arguments and counter-arguments (hopefully of a finite size).

I believe that you are fundamentally wrong about this, and that you actually use induction and probabilities.

First, because without induction, no reasoning about the real world is possible. Do you expect that (at least approximately) the same laws of physics apply yesterday, today, and tomorrow? If they don't, then you can't predict anything about the future (because under the hypothetical new laws of physics, anything could happen). And you can't even say anything about the past, because all our conclusions about the past are based on observing what we have now, and expecting that in the past it was exposed to the same laws of physics. Without induction, there is no argument against "last Thursdayism".

Second, because although you refuse to talk about probabilities, and definitely object to using any numbers, some expressions you use are inherently probabilistic; you just insist on using vague verbal descriptions, which more or less means rounding the scale of probability from 0% to 100% into a small number of predefined baskets. There is a basket called "falsified", a basket called "not falsified, but refuted by a convincing critical argument", a basket called "open debate; there are unanswered critical arguments for both sides", and a basket called "not falsified, and supported by a convincing critical argument". (Well, something like that. The number and labels of the baskets are most likely wrong, but ultimately, you use a small number of baskets, and a flowchart to sort arguments into their respective baskets.) To me, this sounds similar to refusing to talk about integers, and insisting that the only scientifically valid values are "zero", "one", "a few", and "many". I believe that in real life you can approximately distinguish whether your chance of being wrong is more in the order of magnitude of "one in ten" or "one in a million". But your vocabulary does not allow you to make this distinction; there is only the unspecific "no conclusion" and the unspecific "I am not saying it's literally 100% sure, but generally yes"; and at some point on the probability scale you will make the arbitrary jump from the former to the latter, depending on how convincing the critical argument is.
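In code, the "rounding into baskets" I am describing might look something like this (the labels and cutoffs are my own invention for illustration; CR does not specify any):

```python
def cr_basket(probability_of_truth):
    """Collapse a continuous degree of certainty into a few verbal baskets.
    Labels and cutoffs are invented for illustration."""
    if probability_of_truth < 0.05:
        return "refuted by a convincing critical argument"
    if probability_of_truth < 0.95:
        return "open debate; unanswered criticisms on both sides"
    return "not refuted; no outstanding criticism"

print(cr_basket(0.10))  # "open debate; unanswered criticisms on both sides"
print(cr_basket(0.90))  # same basket, despite 9x the chance of being true
```

The information lost between 0.10 and 0.90 is exactly the information I care about.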

On your website, you have a straw-man PowerPoint presentation about how people measure "goodness of an idea" by adding or removing goodness points, on a scale 0-100. Let me tell you that I have never seen anyone using or supporting that type of scale; neither on Less Wrong, nor anywhere else. Specifically, Bayes' theorem is not about "goodness" of an idea; it is about mathematical probability. Unlike "goodness", probabilities can actually be calculated. If you put 90 white balls and 10 black balls in a barrel, the probability of randomly drawing a white ball is 90%. If there is one barrel containing 90 white balls and 10 black balls, and another barrel containing 10 white balls and 90 black balls, and you choose a random barrel, randomly draw five balls, and get e.g. four white balls and one black ball, you can calculate the probability of this being the first or the second barrel. It has nothing to do with "goodness" of the idea "this is the first barrel" or "this is the second barrel".
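For concreteness, here is the barrel calculation (assuming, for simplicity, that the five balls are drawn with replacement):

```python
from math import comb

# Prior: each barrel was equally likely to be chosen.
prior_1 = prior_2 = 0.5

# Probability of drawing 4 white and 1 black ball (order ignored).
def likelihood(p_white, whites, blacks):
    return comb(whites + blacks, whites) * p_white**whites * (1 - p_white)**blacks

l1 = likelihood(0.9, 4, 1)  # barrel with 90 white, 10 black
l2 = likelihood(0.1, 4, 1)  # barrel with 10 white, 90 black

# Bayes' theorem: posterior = prior * likelihood, renormalized.
posterior_1 = prior_1 * l1 / (prior_1 * l1 + prior_2 * l2)
print(posterior_1)  # ~0.9986 -- almost certainly the first barrel
```

No "goodness points" anywhere; just counting.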

My last observation is that your methodology of "let's keep drawing the argument tree, until we reach the conclusion" allows you to win debates by mere persistence. All you have to do is keep adding more and more arguments, until your opponent says "okay, that's it, I also have other things to do". Then, according to your rules, you have won the debate; now all nodes at the bottom of the tree are in favor of your argument. (Which is what I also expect to happen right now.)

And that's most likely all from my side.

Comment author: curi 01 December 2017 07:35:31PM 0 points

Critical Rationalism (CR)

CR is an epistemology developed by 20th century philosopher Karl Popper. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluating ideas. Epistemology says what reason is and how it works (except the epistemologies which reject reason, which we’ll ignore). Epistemology is the most important intellectual field, because reason is used in every other field. How do you figure out which ideas are good in politics, physics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them at a large intellectual disadvantage.

Epistemology offers methods, not answers. It doesn't tell you which theory of gravity is true; it tells you how to productively think and argue about gravity. It doesn't give you a fish or tell you how to catch fish; instead, it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology tells you how to handle disagreements (which are common to every field).

CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics – anything – not just science, observation, data and prediction. CR can even evaluate itself.


CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (not a mistake). And mistakes are common so we shouldn’t try to ignore fallibility (it’s not a rare edge case). It’s also impossible to get a 99% or even 1% guarantee that an idea is true. Some mistakes are unpredictable because they involve issues that no one has thought of yet.

There are decisive logical arguments against attempts at infallibility (including probabilistic infallibility).

Attempts to dispute fallibilism are refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a way around.

CR’s response to this is to accept our fallibility and figure out how to deal with it. But that’s not what most philosophers have done since Aristotle.

Most philosophers think knowledge is justified, true belief, and that they need a guarantee of truth to have knowledge. So they have to either get around fallibility or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working computers and space shuttles. But there’s no way around fallibility, so philosophers have been deeply confused, come up with dumb ideas, and given philosophy a bad name.

So philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The way out is to check your premises. CR solves this problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to the incorrect “justified, true belief” theory of knowledge and the perspective behind it.

Justification is the Major Error

The standard perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, so we try to argue for the idea to show it’s good. We try to prove it, or approximate proof in some lesser way. A new idea starts with no status (it’s a mere guess, hypothesis, speculation), and can become knowledge after being justified enough.

Justification is always due to some thing providing the justification – be it a person, a religious book, or an argument. This is fundamentally authoritarian – it looks for things with authority to provide justification. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification. Which arguments have the authority to provide justification? That status has to be granted by some prior source of justification, which leads to another regress.

Fallible Knowledge

CR says we don’t have to justify our beliefs, instead we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.

When a new idea is proposed, don't ask "How do you know it?" or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it's a good idea (knowledge). Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn't prevent it from being useful and effective (e.g. building spacecraft that successfully reach the moon). You don't need justification or perfection to reach the moon; you just need to fix errors with your designs until they're good enough to work. This approach avoids the regress problems and is compatible with fallibility.

The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.

CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we’ll make, and the better our ideas will be.”

Guesses and Criticism

Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how knowledge can be created.)

How do you know which critical arguments are correct? Wrong question. You just guess it, and the critical arguments themselves are open to criticism. What if you miss something? Then you'll be mistaken, and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating where to direct more attention and what to try to improve).

CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.

Science and Evidence

CR pays extra attention to science. First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes some empirical claim about reality.

Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but no evidence for. Evidence is either compatible with a hypothesis, or not, and no amount of compatible evidence can justify a hypothesis because there are infinitely many contradictory hypotheses which are also compatible with the same data.

These two points are where CR has so far had the largest influence on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence. (Parts of this are now taken for granted by many people who don’t realize they’re fairly new ideas.)

CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can keep track of. So we have to do a targeted search according to some guesses about what might be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.

We also need to interpret our evidence. We don’t see puppies, we see photons which we interpret as meaning there is a puppy over there. This interpretation is fallible – sometimes people are confused by mirrors, mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), etc.

Comment author: Viliam 02 December 2017 01:50:10AM *  0 points

Seems like these "critical arguments" do a lot of heavy lifting.

Suppose you make a critical argument against my hypothesis, and the argument feels smart to you, but silly to me. I make a counter-argument, which to me feels like it completely demolished your position, but in your opinion it just shows how stupid I am. Suppose the following rounds of arguments are similarly fruitless.

Now what?

In a situation between a smart scientist who happens to be right, and a crackpot who refuses to admit the smallest mistake, how would you distinguish which is which? The situation seems symmetrical; both sides are yelling at each other, no progress on either side.

Would you decide by which argument seems more plausible to you? Then you are just another person in a three-person ring, and the current balance of power happens to be 2:1. Is this about having a majority?

Or would you decide that "there is no answer" is the right answer? In that case, as long as there remains a single crackpot on this planet, we have a scientific controversy. (You can't even say that the crackpot is probably wrong, because that would be probabilistic reasoning.)

You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are making some mistakes without realizing it.

Seems to me you kinda admit that knowledge is ultimately uncertain (i.e. probabilistic), but you refuse to talk about probabilities. (Related LW concept: "Fallacy of gray".) We are fallible, but it is wrong to guess how much. We resolve experimentally uncertain hypotheses by verbal fights, which we pretend have exactly one of three outcomes: "side A lost", "side B lost", "neither side lost"; nothing in between, such as "side A seems 3x more convincing than side B". I mean, if you start making too many points on a line, it would start to resemble a continuum, and your argument seems to be that there is no quantitative certainty, only qualitative; that only 0, 1, and 0.5 (or perhaps NaN) are valid probabilities of a hypothesis.

Okay, I feel like I am already repeating myself.

Comment author: Viliam 01 December 2017 02:03:07PM *  0 points

Disclosure: I didn't read Popper in the original (nor do I plan to in the near future; sorry, other priorities), I just had many people mention his name to me in the past, usually right before they shot themselves in the foot. It typically goes like this:

There is a scientific consensus (or at least current best guess) about X. There is a young smart person with their pet theory Y. As the first step, they invoke Popper to say that science didn't actually prove X, because it is not the job of science to actually prove things; science can merely falsify hypotheses. Therefore, the strongest statement you can legitimately make about X is: "So far, science has not falsified X". Which is coincidentally also true about Y (or about any other theory you make up on the spot). Therefore, from the "naively Popperian" perspective, X and Y should have equal status in the eyes of science. Except that so far, much more attention and resources have been thrown at X, and it only seems fair to throw some attention and resources at Y now; and if scientists refuse to do that, well, they fail at science. Which should not be surprising at all, because it is known that scientists generally fail at science; <insert reference to Nassim Taleb, Malcolm Gladwell, or Stephen Jay Gould>.

After reading your summary of Popper (thanks, JenniferRM), my impression is that Popper did a great job debunking some mistaken opinions about science; but ironically, became himself an often-quoted source for other mistaken opinions about science. (I should probably not blame Popper here, but rather the majority of his fans.)

The naive version of science (unfortunately, still very popular in the humanities) that Popper refuted goes approximately like this (with a lot of simplification, of course):

The scientist reads a lot of scientific texts written by other scientists. After a few years, the scientist starts seeing some patterns in nature. He or she makes an experiment or two which seem to fit the pattern, and describes those patterns and experiments on paper. Their colleagues are impressed by the description; the paper passes peer review, is published in a scientific journal, and becomes a new scientific text that the following generations of scientists will study. Now the case is closed, and anyone who doubts the description will face the wrath of the scientific community. (At least until a higher-status scientist later publishes an opposite statement, in which case history is rewritten, and the new description becomes the scientific fact.)

And the "naively Popperian" opposite perspective (again, simplified a lot) goes like this:

Scientists generate hypotheses by an unspecified process. It is a deeply mysterious process, about which nothing specific is allowed to be said, because that would be unscientific. It is only required that the hypotheses be falsifiable in principle. Then you keep throwing resources at them. Some of them get falsified, some keep surviving. And all that a good scientist is allowed to say about them is "this hypothesis was falsified" or "this hypothesis was not falsified yet". Anything beyond that is failing at science. For example, saying "Well, this goes against almost everything we know about nature, is incredibly complicated, and while falsifiable in principle, it would require a budget of $10^10 and some technology that doesn't even exist yet, so... why are we even talking about this, when we have a much simpler theory that is well-supported by current experiments?" is something that a real scientist would never do.

I admit that perhaps, given an unlimited amount of resources, we could do science in the "naively Popperian" way. (This is how AIXI would do it, perhaps to its own detriment.) But this is not how actual science works in real life; and not even how idealized science with fallible-but-morally-flawless scientists could work. In real life, the probability of a tested hypothesis is better than random. For example, if there is a 1 : 1000000 chance that a random molecule could cure a disease X, it usually requires much less than 1000000 studies to find the cure for X. (A pharmaceutical company with a strategy "let's try random molecules and do scientific studies of whether they cure X" would go out of business. Even a PhD student throwing together random sequences of words and trying to falsify them would probably fail to get their PhD.) Falsification can be the last step in the game, but it's definitely not the only step.
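A toy calculation of the difference (all numbers assumed for illustration):

```python
N = 1_000_000  # candidate molecules, exactly one of which works

# Blind search: in a random testing order, the cure is equally likely
# to sit at any position, so the expected number of studies is (N+1)/2.
print((N + 1) / 2)  # 500000.5

# Induction-like search: suppose background knowledge (chemistry,
# similar diseases, past cures) narrows the plausible candidates to
# 0.1% of the space before a single study is run.
shortlist = N // 1000
print((shortlist + 1) / 2)  # 500.5 expected studies
```

The falsification step is identical in both cases; the thousand-fold difference comes entirely from how the hypotheses were generated.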

If I can make an analogy with evolution (of course, analogies can only get us so far, then they break), induction and falsification are to science what mutation and selection are to evolution. Without selection, we would get utter chaos, filled by mostly dysfunctional mutants (or more likely just unliving garbage). But without mutation, at best we would get "whatever was the fittest in the original set". Note that a hypothetical super-mutation where the original organism would be completely disassembled to atoms, and then reconstructed in a completely original random way, would also fail to produce living organisms (unless we threw unlimited resources at the process, which would eventually get us all possible organisms). On the other hand, if humans create an unnatural (but capable of surviving) organism in a lab and release it in the wild, evolution can work with that, too.

Similarly, without falsification, science would be reduced to yet another channel for fashionable dogma and superstition. But without some kind of induction behind the scenes, it would be reduced to trying random hypotheses, and failing at every hypothesis longer than 100 words. And again, if you derive a hypothesis by a method other than induction, science can work with that, too. It's just that the less the new hypothesis is related to what we already know about nature, the smaller the chance it could be right. So in real life, most new hypotheses that survive the initial round of falsifications are generated by something like induction. We may not talk about it, but that's how it is. It is also a reason why scientists study existing science before inventing their own hypotheses. (In a hypothetical world where induction does not work, all they would have to do is study the proper methods of falsification.)

Related chapter of the Less Wrong Sequences: "Einstein's Arrogance".

tl;dr -- "induction vs falsification" is a false dilemma

(BTW, I agree with gjm's response to your last reply in our previous discussion, so I am not going to write my own.)

EDIT: By the way, there is a relatively simple way to cheat the falsifiability criterion by creating a sequence of hypotheses, where each one of them is individually technically falsifiable, but the sequence as a whole is not. So when hypothesis H42 gets falsified, you just move to hypothesis H43 and point out that H43 is falsifiable (and different from H42, therefore the falsification of H42 is irrelevant in this debate), and demand that scientists either investigate H43 or admit that they are dogmatic and prejudiced against you.

As an example, let hypothesis H[n] be: "If you accelerate a proton to 1 - 1/10^n of the speed of light, a Science Fairy will appear and give you a sticker." Suppose we have experimentally falsified H1, H2, and H3; what would that say about H4 or, say, H99? (Bonus points if you can answer this question without using induction.)

Comment author: curi 20 November 2017 12:28:18AM 0 points

I believe that your belief in "refutation by criticism" as something that either is or isn't, but doesn't have "gradation of certainty", is so fundamentally wrong that it doesn't make sense to debate further.

I think there's something really wrong when your reaction to disagreement is to think there's no point in further discussion. That leaves me thinking you're a bad person to discuss with. Am I mistaken?

Making mistakes isn't random or probabilistic. When you make a judgement, there is no way to know some probability that your judgement is correct. Also, if judgements need probabilities, won't your judgement of the probability of a mistake have its own probability? And won't that judgement also have a probability, causing an infinite regress of probability assignments?

Mistakes are unpredictable. At least some of them are. So you can't predict (even probabilistically) whether you made one of the unpredictable types of mistakes.

What you can do, fallibly and tentatively, is make judgements about whether a critical argument is correct or not. And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn't solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn't).

So let me ask you; is Popper's argument against induction the kind of knowledge that cannot be explained to an intelligent adult person using less than 1 page of text; not even in a simplified form?

That'd work fine if they knew everything or nothing about induction. However, it's highly problematic when they already have thousands of pages worth of misconceptions about induction (some of which vary from the next guy's misconceptions). The misconceptions include vague parts they don't realize are vague, non sequiturs they don't realize are non sequiturs, confusion about what induction is, and other mistakes plus cover-up (rationalizations, dishonesty, irrationality).

Induction would be way easier to explain to a 10-year-old in a page than to anyone at LW, due to lack of bias and prior misconceptions. I could also do quantum physics in a page for a ten-year-old. QM is easy to explain at a variety of levels of detail, if you don't have to include anything to preemptively address pre-existing misconceptions, objections, etc. E.g., in a sentence: "Science has discovered there are many things your eyes can't see, including trillions of other universes with copies of you, me, the Earth, the sun, everything."

Comment author: Viliam 20 November 2017 01:09:29AM *  0 points

I think there's something really wrong when your reaction to disagreement is to think there's no point in further discussion.

It's like you believe "A" and "A implies B" and "B implies C", while I believe "non-A" and "non-A implies Q". The point we should debate is whether "A" or "non-A" is correct; because as long as we disagree on this, of course each of us is going to believe a different chain of things (one starting with "A", the other starting with "non-A").

I mean, if I hypothetically believed that absolute certainty is possible and relatively simple to achieve, of course I would consider probabilistic reasoning an interesting but inferior form of reasoning. We wouldn't have this debate. And if you accepted that certainty is impossible (even certainty of refutation), then probability would probably seem like the next best thing.

When you make a judgement, there is no way to know some probability that your judgement is correct.

Okay, imagine this: I make a judgment that feels completely correct to me, and I am not aware of any possible mistakes. But of course I am a fallible human; maybe I actually made a mistake somewhere, maybe even an embarrassing one.

Scenario A: I made this judgement at 10 AM, after having a good night of sleep.

Scenario B: I made this judgement at 2 AM, tired and sleep deprived.

Does it make sense to say that the probability of making the mistake in the judgment B is higher than the probability of making the mistake in the judgment A? In both cases I believe at the moment that the judgment is correct. But in the latter case my ability to notice the possible mistake is smaller.

So while I couldn't make an exact calculation like "the probability of the mistake is exactly 4.25%", I can still be aware that there is some probability of the mistake, and sometimes even estimate that the probability in one situation is greater than in another situation. Which suggests that there is a number, I just don't know it. (But if we could somehow repeat the whole situation million times, and observe that I was wrong in 42500 cases, that would suggest that the probability of the mistake is about 4.25%. Unlikely in real life, but possible as a hypothesis.)
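The hypothetical repetition would look like this in code (the underlying error rate is an assumed number; in real life it is exactly the unknown we are estimating):

```python
import random

true_error_rate = 0.0425  # hidden from the person making the judgement
trials = 1_000_000

# Repeat the "same situation" a million times and count mistakes.
mistakes = sum(random.random() < true_error_rate for _ in range(trials))
print(mistakes / trials)  # ~0.0425 -- the frequency points at the number
```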

Also, if judgements need probabilities, won't your judgement of the probability of a mistake have its own probability?

It definitely will. Notice that those are two different things: (a) the probability that I am wrong, and (b) my estimate of the probability that I am wrong.

Yes, what you point out is a very real and very difficult problem. Estimating probabilities in a situation where everything (including our knowledge of ourselves, and even our knowledge of math itself) is... complicated. Difficult to do, and even more difficult to justify in a debate.

This may even be a hard limit on human certainty. For example, if at every moment of time there is a 0.000000000001 probability that you will go insane, that would mean you can never be sure about anything with probability greater than 0.999999999999, because there is always the chance that however logical and reasonable something sounds to you at the moment, it's merely because you have become insane at this very moment. (The cause of insanity could be e.g. a random tumor or a blood vessel breaking in your brain.) Even if you would make a system more reliable than a human, for example a system maintained by hundred humans, where if anyone goes insane, the remaining ones will notice it and fix the mistake, the system itself could achieve higher certainty, but you, as an individual, reading its output, could not. Because there would always be the chance that you just got insane, and what you believe you are reading isn't actually there.

Relevant LW article: "Confidence levels inside and outside an argument".

And you can, when being precise, formulate all problems in a binary way (a given thing either does or doesn't solve it) and consider criticisms binarily (a criticism either explains why a solution fails to solve the binary problem, or doesn't).

Suppose the theory predicts that the energy of a particle is 0.04 whatever units, and my measurement detected 0.041 units. Does this falsify the theory? Does 0.043, or 0.05, or 0.08? Even when you specify the confidence interval, it is ultimately a probabilistic answer. (And "p<0.05" is also just an arbitrary threshold; why not "p<0.001"?)
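A sketch of what the probabilistic answer looks like in practice (the measurement uncertainty sigma is an assumed number, not something from your comment):

```python
from math import erf, sqrt

theory = 0.040    # predicted energy, in whatever units
measured = 0.041  # observed value
sigma = 0.002     # assumed Gaussian measurement uncertainty

# Two-sided p-value: how surprising is the measurement if the theory
# is true? The standard normal CDF is written here via erf.
z = abs(measured - theory) / sigma
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(z, p_value)  # z = 0.5, p ~ 0.62: nothing here "falsifies" 0.040
```

The answer is a degree of surprise, not a binary verdict; the binary verdict only appears when someone picks a cutoff.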

You can have a "binary" solution only as long as you remain in the realm of words. ("Socrates is a human. All humans are mortal. Therefore Socrates is mortal. Certainty of argument: 100%.") Even there, the longer chain of words you produce, the greater chance that you made a mistake somewhere. I mean, if you imagine a syllogism going over thousand pages, ultimately proving something, you would probably want to check the whole book at least two or three times; which means you wouldn't feel a 100% certainty after the first reading. But the greater problems will appear on the boundary between the words and reality. (Theory: "the energy of the particle X is 0.04 units"; the experimental device displays 0.041. Also, the experimental devices sometimes break, and your assistant sometimes records the numbers incorrectly.)

it's highly problematic when they already have thousands of pages worth of misconceptions

Fair point.

(BTW, I'm going offline for a week now; for reasons unrelated to LW or this debate.)


For the record: Of course there are things where I consider the probability to be so high or so low that I treat them for all practical purposes as 100% or 0%. If you ask me e.g. whether gravity exists, I will simply say "yes"; I am not going to role-play Spock and give you a number with 15 decimal places. I wouldn't even know exactly how many nines are there after the decimal dot. (But again, there is a difference between "believing there is a probability" and "being able to tell the exact number".)

The most obvious impact of probabilistic reasoning on my behavior is that I generally don't trust long chains of words. Give me 1000 pages of syllogisms that allegedly prove something, and my reaction will be "the probability that somewhere in that chain is an error is so high that the conclusion is completely unreliable". (For example, I am not even trying to understand Hegel. Yeah, there are also other reasons to distrust him specifically, but I would not trust such a long chain of logic from any author without experimental confirmation of intermediate results.)

Comment author: curi 18 November 2017 07:18:31PM *  0 points

(b) what you believe are LW beliefs about induction,

when i asked for references to canonical LW beliefs, i was told that would make it a cult, and LW does not have beliefs about anything. since no pro-LW ppl could/would state or link to LW's beliefs about induction – and were hostile to the idea – i think it's unreasonable to ask me to. individual ppl at LW vary in beliefs, so how am i supposed to write a one-size-fits-all criticism? LW ppl offer neither a one-size-fits-all pro-induction explanation nor do any of them offer it individually. e.g. you have not said how you think induction works. it's your job, not mine, to come up with some version of induction which you think actually works – and to do that while being aware of known issues that make that a difficult project.

again, there are methodology issues. unless LW gives targets for criticism – written beliefs anyone will take responsibility for the correctness of (you can do this individually, but you don't want to – you're busy, you don't care, whatever) – then we're kinda stuck (given also the unwillingness to address CR).

your refusal to use outside sources is asking me to rewrite material. why? some attempt to save time on your part. is that the right way to save time? no. could we talk about the right ways to save time? if you wanted to. but my comments about the right way to save time are in outside sources, primarily written by me, which you therefore won't read (e.g. the Paths Forward stuff, and i could do the Popper stuff linking only to my own stuff, which i have tons of, but that's still an outside source. i could copy/paste my own stuff here, but that's stupid. it's also awkward b/c i've intentionally not rewritten essays already written by my colleagues, b/c why do that? so i don't have all the right material written by myself personally, on purpose, b/c i avoid duplication.). so we're kinda stuck there. i don't want to repeat myself for literally more than the 50th time, for you personally (who hasn't offered me anything – not even much sign you'll pay attention, care, keep replying next week, anything), b/c you won't read 1) Popper 2) Deutsch 3) my own links to myself 4) my recent discussions with other LW ppl where i already rewrote a bunch of anti-induction arguments and wasn't answered.

as one example of many links to myself that you categorically don't want to address:

http://curi.us/1917-rejecting-gradations-of-certainty (including the comments)

Comment author: Viliam 19 November 2017 03:55:12PM 0 points

In the linked article, you seem to treat "refutation by criticism" as something absolute. Either something is refuted by criticism, or it isn't refuted by criticism; and in either case you have 100% certainty about which one of these two options it is.

There seems to be no space for situations like "I've read a quite convincing refutation of something, but I still think there is a small probability there was a mistake in this clever verbal construction". It either "was refuted" or it "wasn't refuted"; and as long as you are willing to admit some probability, I guess it by default goes to the "wasn't refuted" basket.

In other words, if you imagine a variable containing value "X was refuted by criticism", the value of this variable at some moment switches from 0 to 1, without any intermediate values. I mean, if you reject gradations of certainty, then you are left with a black-and-white situation where either you have the certainty, or you don't; but nothing in between.

If this is more or less correct, then I am curious about what exactly happens in the moment where the variable actually switches from 0 to 1. Imagine that you are doing some experiments, reading some verbal arguments, and thinking about them. At some moment, the variable is at 0 (the hypothesis was not refuted by criticism yet), and at the very next moment the variable is at 1 (the hypothesis was refuted by criticism). What exactly happened during that last fraction of a second? Some mental action, I guess, like connecting two pieces of a puzzle together, or something like this. But isn't there some probability that you actually connected those two pieces incorrectly, and maybe you will notice this only a few seconds (or hours, days, years) later? In other words, isn't the "refutation by criticism" conditional on the probability that you actually understood everything correctly?

If, as I incorrectly said in previous comments, one experiment doesn't constitute refutation of a hypothesis (because the experiment may be measured or interpreted incorrectly), then what exactly does? Two experiments? Seven experiments? Thirteen experiments and twenty four pages of peer-reviewed scientific articles? Because if you refute "gradations of certainty", then it must be that at some moment the certainty is not there, and at another moment there is... and I am curious about where and why is that moment.

your refusal to use outside sources is asking me to rewrite material. why?

Throwing books at someone is generally known as the "courtier's reply". The more text you throw at me, the smaller the probability that I will read it. (Similarly, I could tell you to read Korzybski's Science and Sanity, and only come back after you mastered it, because I believe -- and I truly do -- that it is related to some mistakes you are making. Would you?)

There are some situations when things cannot be explained by a short text. For example, if a 10-year-old kid asked me to explain quantum physics to him in less than 1 page of text, I would give up. -- So let me ask you; is Popper's argument against induction the kind of knowledge that cannot be explained to an intelligent adult person using less than 1 page of text; not even in a simplified form?

Sometimes the original form of the argument is not the best one. For example, Gödel spent hundreds of pages proving something that kids today could express as "any mathematical theorem can be stored on computer as a text file, which is kinda a big integer in base 256". (Took him hundreds of pages, because people didn't have computers back then.) So maybe the book where Popper explained his idea is similarly not the most efficient way to explain the idea. Also, if an idea cannot be explained without pointing to the original source, that is a bit suspicious. On the other hand, of course, not everyone is skilled at explaining, so sometimes the text written by a skilled author has this advantage.


I believe that your belief in "refutation by criticism" as something that either is or isn't, but doesn't have "gradation of certainty", is so fundamentally wrong that it doesn't make sense to debate further. Because this is the whole point of why probabilistic reasoning, Bayes' theorem, etc. is so popular on LW. (Because probability is what you use when you don't have absolute certainty, and I find it quite ironic that I am explaining this to someone who read orders of magnitude more of Popper than I did.)

Comment author: curi 17 November 2017 06:55:40PM *  0 points

I don't even know what the abbreviation is supposed to mean. Seriously.

Do you even know the name of Popper's philosophy? Did you read the discussions about this that already happened on LW?

It seems that you're completely out of your depth, can't answer me, and don't want to make the effort to learn. You can't answer Popper, don't know of anyone or any writing that can, and are content with that. Your fellows here are the same way. So Popper goes unanswered and you guys stay wrong.

FYI Popper has lots of self-contained writing. Many of his book chapters are adapted from lectures, as you would know if you'd looked. I have written recommendations of which specific parts of Popper are best to read with brief comments on what they are about:


If you include links to other pages, I guess most people will not click them.

Everything you say in your post, about Popper issues, demonstrates huge ignorance, but there are no Paths Forward for you to get better ideas about this. The methodology dispute needs to be settled first, but people (including you) don't want to do that.

Comment author: Viliam 18 November 2017 03:53:25PM 2 points

It seems that you're completely out of your depth, can't answer me, and don't want to make the effort to learn.

I generally agree with your judgment (assuming that the "effort to learn" refers strictly to Popper).

But before I leave this debate, I would like to point out that you (and Ilya) were able to make this (correct) judgment only because I put my cards on the table. I wrote, relatively shortly and without obfuscation, what I believe. Which allowed you to read it and conclude (correctly) "he is just an uneducated idiot". This allowed a quick resolution; and as a side effect I learned something.

This may or may not be ironically related to the idea of falsification, but at this moment I feel unworthy to comment on that.

Now I see two possible futures, and it is more or less your choice which one will happen:

Option 1:

You may try to describe (a) your beliefs about induction, (b) what you believe are LW beliefs about induction, and (c) why exactly are the supposed LW beliefs wrong, preferably with a specific example of a situation where following the LW beliefs would result in an obvious error.

This is the "high risk / high reward" scenario. It will cost you more time and work, and there is a chance that someone will say "oh, I didn't realize this before, but now I see this guy has a point; I should probably read more of what he says", but there is also a chance that someone will say "oh, he got Popper or LW completely wrong; I knew it was not worth debating him". Which is not necessarily a bad thing, but will probably feel so.

Yeah, there is also the chance that people will read your text and ignore it, but speaking for myself, there are two typical reasons why I would do that: either the text is written in a way that makes it difficult for me to decipher what exactly the author was actually trying to say; or the text depends on links to outside sources but my daily time budget for browsing the internet is already spent. (That is why I selfishly urge you to write a self-contained article using your own words.) But other people may have other preferences. Maybe the best approach would be to add footnotes with references to sources, but make them optional for understanding the gist of the article.

Option 2:

You will keep saying: "guys, you are so confused about induction; you should definitely read Popper", and people at LW will keep thinking: "this guy is so confused about induction or about our beliefs about induction; he should definitely read the Sequences", and both sides will be frustrated about how the other side is unwilling to spend the energy necessary to resolve the situation. This is the "play it safe, win nothing" scenario. Also the more likely one.

Last note: Any valid argument made by Popper should be possible to explain without using the word "Popper" in the text. Just like the Pythagorean theorem is not about the person called Pythagoras, but about squares on triangles, and would be equally valid if it had been discovered or popularized by a completely different person; you could simply call it the "squares-on-triangles theorem" and it would work equally well. (Related in the Sequences: "Guessing the teacher's password"; "Argument Screens Off Authority".) If something is true about induction, it is true regardless of whether Popper did or didn't believe it.

Comment author: IlyaShpitser 17 November 2017 01:18:45AM *  3 points

You should probably actually read Popper before putting words in his mouth.

According to Popper, no matter how much scientific evidence we have in favor of e.g. the theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it.

You found this claim in a book of his? Or did you read some Wikipedia, or what?

For example, this is a quote from the Stanford Encyclopedia of Philosophy:

Popper has always drawn a clear distinction between the logic of falsifiability and its applied methodology. The logic of his theory is utterly simple: if a single ferrous metal is unaffected by a magnetic field it cannot be the case that all ferrous metals are affected by magnetic fields. Logically speaking, a scientific law is conclusively falsifiable although it is not conclusively verifiable. Methodologically, however, the situation is much more complex: no observation is free from the possibility of error—consequently we may question whether our experimental result was what it appeared to be.

Thus, while advocating falsifiability as the criterion of demarcation for science, Popper explicitly allows for the fact that in practice a single conflicting or counter-instance is never sufficient methodologically to falsify a theory, and that scientific theories are often retained even though much of the available evidence conflicts with them, or is anomalous with respect to them.

You guys still do that whole "virtue of scholarship" thing, or what?

Comment author: Viliam 17 November 2017 10:31:11AM 0 points

You guys still do that whole "virtue of scholarship" thing, or what?

Well, this specific guy has a job and a family, and studying "what Popper believed" is quite low on his list of priorities. If you want to provide a more educated answer to curi, go ahead.

Comment author: curi 14 November 2017 04:25:18AM 0 points

if i were to provide an anti-induction article, what properties should it have?

apparently it should be different in some way than the ones already provided by Popper and DD, as individual book chapters.

one question is whether it should assume the reader has background knowledge of CR.

if so, it's easy, it'll be short ... and people here won't understand it.

if not, it'll be long and very hard to understand, and will repeat a lot of content from Popper's books.

what about a short logical argument about a key point, which doesn't explain the bigger picture? possible, but people hate those. they don't respond well to them. they don't just want their view destroyed without understanding any alternative. and anyway their own views are too vague to criticize in a quick, logical way b/c whatever part you criticize, they can do without. there is no clear, essential, philosophical core they are attached to. if advocates of induction actually knew their own position, in exacting detail, inside and out, then you could quickly point out a logical flaw and they'd go "omg, that makes everything fall apart". but when you deal with people who aren't very clear on their own position, and who actually think all their beliefs are full of errors and you just have to muddle through and do your best ... then what kind of short argument will work?

Comment author: Viliam 16 November 2017 11:58:02PM *  2 points

if i were to provide an anti-induction article, what properties should it have?

Regardless of the topic, I would say that the article should be easy to read, and relatively self-contained. For example, instead of "go read this book by Popper to understand how he defines X" you could define X using your own words, preferably giving an example (of course it's okay to also give a quote from Popper's book).

one question is whether it should assume the reader has background knowledge of CR.

I don't even know what the abbreviation is supposed to mean. Seriously.

Generally, I think that the greatest risk is people not even understanding what you are trying to say. If you include links to other pages, I guess most people will not click them. Aim to explain, not to convince, because a failure in explaining is automatically also a failure in convincing.

Maybe it would make sense for you to look at the articles that I believe (with my very unclear understanding of what you are trying to say) may be most relevant to your topic:
1) "Infinite Certainty" (and its mathy sequel "0 And 1 Are Not Probabilities"), and
2) "Scientific Evidence, Legal Evidence, Rational Evidence".

Because it seems to me that the thing about Popper and induction is approximately this...

Simplicio: "Can science be 100% sure about something?"
Popper: "Nope, that would mean that scientists would never change their minds. But they sometimes do, and that is an accepted part of science. Therefore, scientists are never 100% sure of their theories."
Simplicio: "Well, if they can't prove anything with 100% certainty, why don't we just ignore them completely? It's just another opinion, right?"
Popper: "Uhm... wait a minute... scientists cannot prove anything, but they can... uhm... disprove things! Yeah, that's what they do; they make many theories, they disprove most of them, and the one that keeps surviving is the official winner, for the moment. So it's not like the scientists proved e.g. the theory of relativity, but rather that they disproved all known competing theories, and failed to disprove the theory of relativity (yet)."

To which I would give the following objection:

How exactly could it be impossible to prove "X", and yet possible to disprove "not X"? If scientists are able to falsify e.g. the hypothesis that "two plus two does not equal four", isn't it the same as proving the hypothesis that "two plus two equals four"?

I imagine that the typical situation Popper had in mind included a few explicit hypotheses, e.g. A, B, C, and then a remaining option "something else that we did not consider". So he is essentially saying that scientists can experimentally disprove e.g. B and C, but that's not the same as proving A. Instead, they proved "either A, or something else that we did not consider, but definitely neither B nor C". In short: B and C were falsified, but A wasn't proven. And as long as there remains an unspecified category "things we did not consider", there is always a chance that A is merely an approximate solution, and the real solution is still unknown.
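The same point in a toy Bayesian calculation (all prior numbers assumed):

```python
# Prior over the explicit hypotheses plus a catch-all category.
prior = {"A": 0.3, "B": 0.3, "C": 0.3, "something else": 0.1}

# Experiments falsify B and C: zero them out and renormalize.
for falsified in ("B", "C"):
    prior[falsified] = 0.0
total = sum(prior.values())
posterior = {h: p / total for h, p in prior.items()}
print(posterior)  # A: 0.75, something else: 0.25 -- A helped, not proven
```

Falsifying B and C raised the probability of A, but as long as the catch-all category has any weight, A is never proven.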

But it doesn't always have to be like this. Especially in math. But also in real life. Consider this:

According to Popper, no matter how much scientific evidence we have in favor of e.g. the theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it. And recently, the theory of relativity was indeed falsified by an experiment. Does it mean we should stop teaching the theory of relativity, because now it was properly falsified?

With the benefit of hindsight, now we know there was a mistake in the experiment. But... that's exactly my point. The concepts of "proving" and "falsifying" are actually much closer than Popper probably imagined. You may have a hypothesis "H", and an experiment "E", but if you say that you falsified "H", it means you have a hypothesis "F" = "the experiment E is correct and falsifies the theory H". To falsify H by E is to prove F; therefore if F cannot be scientifically proven, then H cannot be scientifically falsified. Proof and falsification are not two fundamentally different processes; they are actually two sides of the same coin. To claim that the experiment E falsifies the hypothesis H, is to claim that you have a proof that "the experiment E falsifies the hypothesis H"... and the usual interpretation of Popper is that there are no proofs in science.

The answer generally accepted on LessWrong, I guess, is that what really happens in science is that people believe theories with greater and greater probability. Never 100%. But sometimes with a very high probability instead, and for most practical purposes such high probability works almost like certainty. Popper may insist that science is unable to actually prove that the moon is not made of cheese, but the fact is that most scientists will behave as if they already had such proof; they are not going to keep an open mind about it.


Short version: Popper was right about the inability to prove things with 100% certainty, but then he (or maybe just the people who quote him) made the mistake of imagining that disproving things is a process fundamentally different from proving things, so you can at least disprove things with 100% certainty. My answer is that you can't even disprove things with probability 100%, but that's okay, because the "100%" part was just a red herring anyway; what actually happens in science is that things are believed with greater probability.

Comment author: curi 13 November 2017 01:35:17AM 0 points

you missed the intended point about representatives. the point is that someone takes responsibility for the ideas they believe are true. the point is e.g. that someone be available to answer questions about some idea. if the idea has no representatives in the sense of people who think it's good and answer questions about it, then that's problematic. then it's hard to learn and there's no one to improve or advocate it.

If you find that Sequences say "A" and truth is actually "B", what you can do is write an article on LW explaining why "B" is true.

And then people don't like me, b/c i'm a heretic who denies induction, so they ignore it. when there is no mechanism for correcting errors, what you end up with is bias: people decide to pay attention, or not, according to social status, bias, etc.

No matter how much I am told "X", no matter how much I in theory agree with "X", if I pay enough attention, I find myself going against "X" all the time.

For all X? E.g. "don't murder"? This part isn't clear.

Do you mean some hypothetical ideal of reason, or how smart but imperfect people actually do it?

The tradition of reason deals with both. It offers some guiding principles and ideals, as well as practical guidance, rules of thumb, tips, etc. People have knowledge of both of these.

Really? What is your opinion on the existence of atoms, or theory of relativity? I mean, the Einstein guy is just some unimportant rando; so did you develop the whole theory on your own?

I am familiar with some science and able to make some judgements about scientific arguments myself. Especially using resources like asking questions of physicists I know and using books/internet. I don't helplessly take people's words for things; i seek out explanations at the level of detail i'm interested in and make a judgement. And science is an interest of mine.

I have no criticism of the atomic theory, no objection to it. I know some stuff about it and I agree. I don't know of any contrary position that's any good. I'm convinced by the reasoning, not the prestige of the reasoners.

I didn't personally do all the experiments. Why should I? I don't accept an experiment merely b/c the person who did it had a PhD, but I don't automatically reject it either. I make a judgement about the experiment (or idea) instead of about the person's credentials.

I paid attention to physics, initially, because I found the arguments in the book The Fabric of Reality high quality and interesting. The book looked interesting to me, so I read the opening paragraphs online, thought they were good, and got the book. I didn't look for the book with the most prestigious author. I don't see why these historical details matter, but you asked about them. Physics is important (we live in the physical world; we're made of atoms; we move; etc) and worthy of interest (though others are welcome to pursue other matters).

tl;dr: I won't take Einstein's word for it, but I can be impressed by his reasoning.

yet I find mistakes in the very first paragraph of the very first article

let's not jump to conclusions before discussing the matter. we disagree, or there is a misunderstanding.

Comment author: Viliam 13 November 2017 11:28:02PM 0 points [-]

And then people don't like me, b/c i'm a heretic who denies induction, so they ignore it.

Have you tried posting here an article about why induction is wrong? Preferably starting with an explanation of what you mean by "induction", just to make sure we are all debating the same thing.

Of course there is a chance that people will ignore the article, but I would be curious to learn e.g. why evolution gave so many organisms the capacity for reinforcement learning, if the fundamental premise of reinforcement learning -- that things in the future are likely to be similar to things in the past -- is wrong.
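
To illustrate that premise, here is a minimal sketch of reinforcement learning, an epsilon-greedy two-armed bandit in Python; the payoff rates are invented for illustration. The agent's only asset is the assumption that an arm's average past reward predicts its future reward; if the environment stopped resembling its own past, the learned estimates would be worthless.

```python
import random

ARM_REWARD_PROB = [0.2, 0.8]   # true (hidden) payoff rates, made up
counts = [0, 0]
estimates = [0.0, 0.0]          # running averages of observed rewards

for step in range(1000):
    if random.random() < 0.1:                       # explore occasionally
        arm = random.randrange(len(estimates))
    else:                                           # exploit past experience
        arm = max(range(len(estimates)), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < ARM_REWARD_PROB[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(estimates)  # converges near [0.2, 0.8] -- but only because the
                  # environment behaves tomorrow as it did yesterday
```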

This part isn't clear.

(Yeah, that's me writing at midnight, after my daughter finally decides to go to sleep. Sorry for that.)

What I meant was that for me personally, the greatest obstacle in "following reason" is not the reasoning part, but rather the following part. (Using the LW lingo, the greatest problem is not epistemic rationality, but instrumental rationality.) I feel quite confident that I am generally good at reasoning, or at least better than most of the population. What I have a problem with is actually following my own advice. Therefore, instead of developing smarter and smarter arguments, I wish to become better at implementing the things I already know.

And I suspect this is the reason why CFAR focuses on things like "trigger-action planning" et cetera, instead of e.g. publishing articles analysing the writings of Popper. The former simply seems to provide much more value than the latter.

Sometimes the lessons seem quite easy -- the map is not the territory; make sure you communicate meaning, not just words; be open to changing your mind in either direction; etc -- yet even after years of trying you are still sometimes doing it wrong. People enjoy "insight porn", but what they need is practicing the boring parts until they become automatic.

I don't accept an experiment merely b/c the person who did it had a PhD, but I don't automatically reject it either.

But do you privilege the hypothesis, if you heard it from a person with a PhD?

Oh, I guess this may be another thing that I rarely find outside of LW: reasoning in degrees of gray, instead of black and white. I am not asking whether you take Einstein's every word as sacred. I am asking whether you increase the probability of something, if you learn that Einstein said so.
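
To show the kind of gray-scale reasoning I mean, here is a toy Bayesian calculation in Python; the `posterior` helper and the reliability numbers are invented. Learning that a careful expert asserts "X" raises the probability of "X" substantially, without ever making it certain.

```python
# Toy model of "Einstein said X" as evidence, not proof.
# All numbers are invented for illustration.

def posterior(prior, p_say_if_true, p_say_if_false):
    """P(X | expert asserts X) via Bayes' theorem."""
    num = p_say_if_true * prior
    return num / (num + p_say_if_false * (1 - prior))

prior = 0.01  # X starts as one obscure claim among many
# a careful expert asserts X far more often when it is true
print(posterior(prior, p_say_if_true=0.5, p_say_if_false=0.005))
# ~0.50: a large update from 0.01, yet nowhere near certainty
```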

Comment author: Viliam 13 November 2017 12:03:41AM *  0 points [-]

In my understanding, there’s no one who speaks for LW, as its representative, and is responsible for addressing questions and criticisms. LW, as a school of thought, has no agents, no representatives – or at least none who are open to discussion.

As some have already said, this is considered a feature, not a bug. We do not care (or try not to care) about "what is the LW way?". Instead we (try to) focus on "how is it, really?". To quote Eliezer, who is closest to being the representative of LW:

Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.

So, it feels like you would like to have a phone number of the Great Teacher, to ask him about the color of the sky. While this site is -- if I may continue the metaphor -- trying to teach you how to actually look at the sky, and explaining how the human eye perceives colors.

Suppose I wrote some criticisms of the sequences, or some Bayesian book. Who will answer me? Who will fix the mistakes I point out, or canonically address my criticisms with counter-arguments? No one.

If you find that Sequences say "A" and truth is actually "B", what you can do is write an article on LW explaining why "B" is true. (Pointing out that Sequences say "A" is optional; I think it would be better done afterwards, so that people can debate "B" independently. But do as you wish.)

It may happen that different people will give different opinions. But then you can let them argue against each other.

So how is progress to be made?

Here I may be just talking about myself, but I seek progress in a completely different place. I don't care that much about playing with words, which many intelligent people, including you, seem to be so fond of. I see humans, including myself, as deeply imperfect beings. No matter how much I am told "X", no matter how much I in theory agree with "X", if I pay enough attention, I find myself going against "X" all the time. Thus, instead of having yet another debate about the virtues of "X", I would rather spend my attention trying to practice "X". Because as long as there is a huge gap between what I profess and what I actually do, it does not matter much whether I profess correct ideas. Actually, talking about rationality, it may be even worse. The ideas I profess can be not only right or wrong, but possibly also irrelevant, or confused, or utterly meaningless.

You linked a website. Let me just look at the first article: "Why is Reason Important?". You talk about something called "Reason". Do you mean some hypothetical ideal of reason, or how smart but imperfect people actually do it? Oh wait, let me ask an even more important question: Are you even aware that there is a distinction between these two? Because the article does not reflect that.

Still reading the first paragraph: "Reason also rejects the idea that authorities can or should tell us what the truth is. Instead, we should judge ideas ourselves, and based on the content of the idea not the person who said it. Even if I am the person who said an idea, and I have a PhD, that doesn't count for anything"... Really? What is your opinion on the existence of atoms, or the theory of relativity? I mean, the Einstein guy is just some unimportant rando; so did you develop the whole theory on your own? Did you do all the relevant experiments to confirm that atoms do indeed exist? Wait, I have a more important question: Even if you have personally verified the theory of relativity, why did you even decide that verifying this theory was worth your time? I mean, (1) there are millions of possible theories, and you certainly cannot verify all of them, and (2) the fact that Einstein and a few others believe in some specific theory "X" means absolutely nothing before you have verified it for yourself, right? So, why did you even choose to pay attention to the theory of relativity, if Einstein's words mean nothing, and there were a million other potential theories competing for your attention?

...this was just an example of what I meant by "playing with words". You wrote a whole website of arguments that I guess seem convincing to you, and yet I find mistakes in the very first paragraph of the very first article. If you can imagine that this is how I feel about almost every paragraph of every article on your website, you can understand why I am unimpressed, and why I don't want to go this way.

Which challenges are addressed? All of them.

Okay, I am curious: has someone already told you something similar to what I just did? If yes, could you please give me a pointer to how it was addressed?
