Robin Hanson's posts from the AI Foom debate are not included in the list of all articles. Covering only Yudkowsky's side of the debate would be a little strange for readers I think. Should we feature Hanson's posts (and others who participated in the debate) during that time as well?
If a biopsy is the next step in diagnosing breast cancer after a positive mammogram, then we shouldn't perform mammograms on anyone for whom a biopsy still wouldn't be worthwhile even if their mammogram turned up positive.
Yes, that's exactly right.
And although I'm having a hard time finding a news article to verify this, someone informed me that the official breast cancer screening recommendations in the US (or was it a particular state, perhaps California?) were recently modified such that it is now not recommended that women younger than 40 (50?) receive regular screening. The young woman who informed me of this change in policy was quite upset about it. It didn't make any sense to her. I tried to explain to her how it actually made good sense when you think about it in terms of base rates and expected values, but of course, it was no use.
But to return to the issue of clinical implications, yes: if a woman belongs to a population where the result of a mammogram would not change our decision about whether a biopsy is necessary, then probably she shouldn't have the mammogram. I suspect that this line of reasoning would sound quite foreign to most practicing doctors.
I agree with previous comments about publishing in journals being an important status issue, but I think there is other value as well which is being ignored. For all of its annoyances and flaws, one good thing about peer review is that it really makes your paper better. When you submit a pretty good paper to a journal and get back the "revise and resubmit" along with the detailed list of criticisms and suggestions, then by the time the paper actually makes it into the journal, chances are that it will have become a really good paper.
But to return to the issue of papers being taken more seriously when published in a journal, I think that this view is actually quite justified. For researchers who are not already very knowledgeable in the precise area that is the topic of a given paper, whether or not the paper has withstood peer review is a very useful heuristic cue toward how much weight you should place on it. Basically, peer review keeps the author honest. An author posting a paper on his website can say pretty much whatever he wants. One of the purposes of peer reviewers is to make sure that the author isn't making unreasonable claims, mischaracterizing theoretical positions, "reviewing" the relevant previous literature in a grossly selective way, etc. Like I said, if someone is already very familiar with the area, then they can evaluate these aspects of the paper for themselves. But if you'd like to communicate your position to a wider academic audience, peer review can carry your paper a long way.
I got to shadow some breast physicians last month, and although it's sort of off topic I think I gained some insight as to why so many doctors get this question wrong.
Which is because it's very different from any situation they ever come across in clinical practice. Guidelines are to screen people with mammography and examination; anyone who comes up as suspicious on those two tests then gets a biopsy. No one gets diagnosed with breast cancer from a mammogram alone, the progression from mammogram on to the next step is hard-coded into a pre-determined algorithm, and so the question of "This woman got a positive on the mammogram; does she have cancer?" never comes up. A question that does come up a lot is a woman panicking because she got a positive mammogram and demanding to know whether she has breast cancer, and the inevitable answer is "We'll need to do more tests, but don't worry too much yet because most of these things are false positives."
So the doctors involved know that most real mammogram results are false positives, they know how to diagnose breast cancer based on the combination of tests they actually do, they just can't do Bayesian math problems when given probabilities. This is kind of interesting if you're curious about their intelligence but as far as I know doesn't really affect clinical care.
As far as the take-home practical message goes, on my reading it was never about how well doctors could "diagnose cancer" per se based on mammogram results--rather, the reason we ask about P(cancer | positive) is because it ought to inform our decision about whether a biopsy is really warranted. If a healthy young woman from a population with an exceedingly low base rate for breast cancer has a positive mammogram, the posterior probability of her having cancer may still be low enough that there might actually be negative expected value in following up with a biopsy; after all, let's not forget that a biopsy is not a trivial procedure and things do sometimes go wrong.
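To make the base-rate point concrete, here is a small sketch of the Bayes calculation being discussed. All three numbers (base rate, sensitivity, false positive rate) are made-up illustrative values for a hypothetical low-risk population, not real clinical figures:

```python
# Illustrative numbers only (assumptions, not real clinical data):
base_rate = 0.001          # P(cancer) in a young, low-risk population
sensitivity = 0.8          # P(positive mammogram | cancer)
false_positive_rate = 0.1  # P(positive mammogram | no cancer)

# Bayes' theorem: P(cancer | positive)
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"P(cancer | positive mammogram) = {posterior:.4f}")
```

With these assumed numbers the posterior comes out under 1%, which is why the question of whether a biopsy is worth its risks and costs doesn't answer itself just because the mammogram was positive.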
So I think this actually does have some implication for real-world clinical care: we ought to question whether it is wise to automatically follow up all positive mammograms with biopsies. Maybe it is, and maybe it isn't, but I don't think we should take the answer for granted, as currently appears to be the case.
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology.
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
Is that basically right?
That is the general idea (but incomplete).
The reason we behave as if it's true is that it's the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn't want to act on an idea that we (thought we) saw a mistake in, over one we don't think we see any mistake with -- we should use what (fallible) knowledge we have.
A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).
One reason that not being criticized isn't a justification is that saying it is gets you a regress problem. So let's not say that! The other reason is: what would that be adding as compared with not saying it? It's not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).
Terminology isn't terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don't like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.
Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?
I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it's not clear that this discussion warrants an entirely new topic.
Terminology isn't terribly important . . . If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.
Okay. So what is really at issue here is whether or not the Popperian conception of a good theory, whatever we call that, leads to regress problems similar to those experienced by "justificationist" systems.
It seems to me that it does! You claim that the particular feature of justificationist systems that leads to a regress is their reliance on positive arguments. Popper's system is said to avoid this issue because it denies positive arguments and instead only recognizes negative arguments, which circumvents the regress issue so long as we accept modus tollens. But I claim that Popper's system does in fact rely on positive arguments at least implicitly, and that this opens the system to regress problems. Let me illustrate.
According to Popper, we ought to act on whatever theory we have that has not been falsified. But that itself represents a positive argument in favor of any non-falsified theory! We might ask: okay, but why ought we to act only on theories which have not been falsified? We could probably come up with a pretty reasonable answer to this question--but as you can see, the regress has begun.
When you have exactly one non-refuted theory, you go with that.
The other cases are more complicated and difficult to understand.
Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede?
If i convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.
[option 1] since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other.
But some are criticized and some aren't.
[option 2] conjecture that best weathered the criticisms you were able to muster
But how is that to be judged?
No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)
Or is it something radically different from these two altogether?
Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.
Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.
If i convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?
Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can't reasonably guarantee that I will not have later objections as well before we've even had the discussion!
So let me see if I'm understanding this correctly. What we are looking for is the one conjecture which appears to be completely impervious to any criticism that we can muster against it, given our current knowledge. Once we have found such a conjecture, we -- I don't want to say "assume that it's true," because that's probably not correct -- we behave as if it were true until it finally is criticized and, hopefully, replaced by a new conjecture. Is that basically right?
I'm not really seeing how this is fundamentally anti-justificationist. It seems to me that the Popperian epistemology still depends on a form of justification, but that it relies on a sort of boolean all-or-nothing justification rather than allowing graded degrees of justification. For example, when we say something like, "in order to make a decision, we need to have a guiding theory which is currently impervious to criticism" (my current understanding of Popper's idea, roughly illustrated), isn't this just another way of saying: "the fact that this theory is currently impervious to criticism is what justifies our reliance on it in making this decision?"
In short, isn't imperviousness to criticism a type of justification in itself?
Could you explain how a Popperian disputes such an assertion? [(50% probability of humanity surviving the next century)]
e.g. by pointing out that whether we do or don't survive depends on human choices, which in turn depend on human knowledge. And the growth of knowledge is not predictable (exactly or probabilistically). If we knew its contents and effects now, we would already have that knowledge. So this is not prediction but prophecy. And prophecy has a built-in bias towards pessimism: because we can't make predictions about future knowledge, prophets in general make predictions that disregard future knowledge. These are explanatory, philosophical arguments which do not rely on evidence (that is appropriate because it is not a scientific or empirical mistake being criticized). No corresponding calculation is made at all.
You ask about how Popperians make decisions if not with such calculations. Well, say we want to decide if we should build a lot more nuclear power plants. This could be taken as gambling with a lot of lives, and maybe even all of them. Of course, not doing it could also be taken as a way of gambling with lives. There's no way to never face any potential dangers. So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
You seem to be arguing that Bayesianism is wrong, which is a very different thing.
I think it's wrong as an epistemology. For example because induction is wrong, and the notion of positive support is wrong. Of course Bayes' theorem is correct, and various math you guys have done is correct. I keep getting conflicting statements from people about whether Bayesianism conflicts with Popperism or not, and I don't want to speak for you guys, nor do I want to discourage anyone from finding the shared ideas or discourage them from learning from both.
Would you never take a bet?
Bets are made on events, like which team wins a sports game. Probabilities are fine for events. Probabilities of the truth of theories are problematic (b/c e.g. there is no way to make them non-arbitrary). And it's not something a fallibilist can bet on because he accepts we never know the final truth for sure, so how are we to set up a decision procedure that decides who won the bet?
Would you never take an action that could possibly be bad and could possibly be good, which requires weighing two uncertain outcomes?
We are not afraid of uncertainty. Popperian epistemology is fallibilist. It rejects certainty. Life is always uncertain. That does not imply probability is the right way to approach all types of uncertainty.
This brings me back to my initial query: give a specific case where Popperian reasoning diverges from Bayesian reasoning, explain why they diverge, and explain why Bayesianism is wrong. Explain why Bayesian's willingness to bet does harm. Explain why Bayesians are slower than Popperians at coming to the same conclusion. Whatever you want.
Bayesian reasoning diverges when it says that ideas can be positively supported. We diverge because Popper questioned the concept of positive support, as I posted in the original text on this page, and which no one has answered yet. The criticism of positive support begins by considering what it is (you tell me) and how it differs from consistency (you tell me).
So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't understand all the references to longer arguments being made there, I would criticize it and ask for the details. Meanwhile the "no" answer and its variants will get refuted by criticism. Sometimes entire infinite categories of conjectures will be refuted by a criticism, e.g. the anti-nuclear people might start arguing with conspiracy theories. By providing a general purpose argument against all conspiracy theories, I could deal with all their arguments of that type. Does this illustrate the general idea for you?
Almost, but you seem to have left out the rather important detail of how to actually make the decision. Based on the process of criticizing conjectures you've described so far, it seems that there are two basic routes you can take to finish the decision process once the critical smoke has cleared.
First, you can declare that, since there is no such thing as confirmation, it turns out that no conjecture is better or worse than any other. In this way you don't actually make a decision and the problem remains unsolved.
Second, you can choose to go with the conjecture that best weathered the criticisms you were able to muster. That's fine, but then it's not clear that you've done anything different from what a Bayesian would have done--you've simply avoided explicitly talking about things like probabilities and priors.
Which of these is a more accurate characterization of the Popperian decision process? Or is it something radically different from these two altogether?
I think this has at least some truth to it. Keep in mind that there is an IRC channel, though I am not sure how much it gets used. I would endorse the creation of a subreddit for rationalists (which would encourage all manner of offtopic discussions) as opposed to a subreddit for discussion of rationality.
Two corrections: I think you mean 'bonding' in the title, and I think you mean 'price' not 'prise' in the 2nd to last paragraph.
Might it be a good idea to feature the IRC channel more centrally on the website? Eliezer's concern notwithstanding, if I'm going to kill time anyway (and believe me, I'm going to anyway), it might be nice to do so in a busy LW IRC room. I could think of less productive things to do for an hour.
we hate not having outgroups because they serve so well to help reassure us in our ingroup values.
This hypothesis would explain outgroup/ingroup activity only in contexts where there are associated values. That doesn't fit in the example in question.
Sure there are associated values. By implying that a particular out-group is "ugly, smelly, no friends, socially unacceptable, negative, aggressive," etc. etc., you simultaneously imply that your in-group is none of those things. You elevate the in-group by derogating the out-group. Presumably you and your in-group value not having all of those negative traits.
I think you're taking the fundamentally wrong approach. Rather than trying to simply predict when you'll be sleepy in the near-term, you should try to actively get your sleeping patterns under control.