All of JonathanLivengood's Comments + Replies

If the AI actually ends up with strong evidence for a scenario it assigned super-exponential improbability, the AI reconsiders its priors and the apparent strength of evidence rather than executing a blind Bayesian update, though this part is formally a tad underspecified.

I would love to have a conversation about this. Is the "tad" here hyperbole or do you actually have something mostly worked out that you just don't want to post? On a first reading (and admittedly without much serious thought -- it's been a long day), it seems to me that this... (read more)

0Eliezer Yudkowsky
The hyperbole one. I wasn't intending the primary focus of this post to be on the notion of a super-update - I'm not sure if that part needs to make it into AIs, though it seems to me to be partially responsible for my humanlike foibles in the Horrible LHC Inconsistency. I agree that this notion is actually very underspecified but so is almost all of bounded logical uncertainty.

Ah, I see that I misread. Somehow I had it in my head that you were talking about the question on the philpapers survey specifically about scientific realism. Probably because I've been teaching the realism debate in my philosophy of science course the last couple of weeks.

I am, however, going to disagree that I've given a too strong characterization of scientific realism. I did (stupidly and accidentally) drop the phrase "... is true or approximately true" from the end of the second commitment, but with that in place, the scientific realist real... (read more)

We do pretty well, actually (pdf). (Though I think this is a selection effect, not a positive effect of training.)

I'm guessing that you don't really know what anti-realism in philosophy of science looks like. I suspect that most of the non-specialist philosophers who responded also don't really know what the issues are, so this is hardly a knock against you. Scientific realism sounds like it should be right. But the issue is more complicated, I think.

Scientific realists commit to at least the following two theses:

(1) Semantic Realism. Read scientific theories literally. If one theory says that space-time is curved and there are no forces, while the other says that spa... (read more)

3Rob Bensinger
Jonathan, Anti-Realism here isn't restricted to the view in philosophy of science. It's also associated with a rejection of the correspondence and deflationary theories of truth and of external-world realism. I'm currently somewhere in between a scientific realist and a structural realist, and I'm fine with classifying the latter as an anti-realism, though not necessarily in the sense of Anti-Realism Chalmers coined above to label one of the factors. Your characterization of scientific realism, though, is way too strong. "In every case" should read "In most cases" or "In many cases", for Epistemic Realism. That's already a difficult enough view to defend, without loading it with untenable absolutism. My main concern with Anti-Realists isn't that they're often skeptical about whether bosons exist; it's that they're often skeptical about whether tables exist, and/or about whether they're mind-independent, and/or about whether our statements about them are true in virtue of how the world outside ourselves is.

Are the meetings word of mouth at this point, then? When is the next meeting planned?

0Manfred
Oh, sorry, by "we" I meant the constituent people, not the group.

I have had some interest, but I never managed to attend any of the previous meetups. I don't know if I will find time for it in the future.

That question raises a bunch of interpretive difficulties. You will find the expression sine qua non, which literally means "without which not," in some medieval writings about causation. For example, Aquinas rejects mere sine qua non causality as an adequate account of how the sacraments effect grace. In legal contexts today, that same expression denotes a simple counterfactual test for causation -- the "but for" test. One might try to interpret the phrase as meaning "indispensable" when Aquinas and other medievals use it and... (read more)

I'll say it again: there is no point in criticising philosophy unless you have (1) a better way of (2) answering the same questions.

Criticism could come in the form of showing that the questions shouldn't be asked for one reason or another. Or criticism could come in the form of showing that the questions cannot be answered with the available tools. For example, if I ran into a bunch of people trying to trisect an arbitrary angle using compass and straight-edge, I might show them that their tools are inadequate for the task. In principle, I could do tha... (read more)

-5Peterdjones

That is being generous to Hume, I think. The counterfactual account in Hume is an afterthought to the first of his two (incompatible) definitions of causation in the Enquiry:

Similar objects are always conjoined with similar. Of this we have experience. Suitably to this experience, therefore, we may define a cause to be an object, followed by another, and where all the objects similar to the first are followed by objects similar to the second. Or in other words where, if the first object had not been, the second never had existed.

As far as I know, this ... (read more)

1IlyaShpitser
I agree that Hume was not thinking coherently about causality, but the credit for the counterfactual definition still ought to go to him, imo. Are you aware of an earlier attempt along these lines?

Wow! Thanks for the Good Thinking link. Now I won't have to scan it myself.

It might help if you told us which of the thousands of varieties of Bayesianism you have in mind with your question. (I would link to I.J. Good's letter on the 46656 Varieties of Bayesians, but the best I could come up with was the citation in Google Scholar, which does not make the actual text available.)

In terms of pure (or mostly pure) criticisms of frequentist interpretations of probability, you might look at two papers by Alan Hajek: fifteen arguments against finite frequentism and fifteen arguments against hypothetical frequentism.

In terms of Bayesia... (read more)

3Axel
Would this be I.J. Good's letter on the 46656 Varieties of Bayesians? (I'm practicing my google-fu)

Oh, the under-specification! ;)

Seems to me that curing cancer swamps out everything else in the story. Supposing that World War 2 was entirely down to Hitler, the casualties came to about 60-80 million. By comparison, back of the envelope calculations suggest that around 1.7 million people die from cancer each year* in the U.S., E.U., and Japan taken together. See the CDC numbers and the Destatis numbers (via Google's public data explorer) for the numbers that I used to form a baseline for the 1.7 million figure.

That means that within a generation or two, the cancer guy would have saved... (read more)
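The back-of-the-envelope comparison can be checked in a few lines (a sketch using only the figures quoted above; the 1.7 million/year baseline is the comment's own estimate):

```python
# Back-of-the-envelope check of the comparison above. All figures come
# from the comment: 60-80 million WW2 deaths, ~1.7 million cancer deaths
# per year across the U.S., E.U., and Japan combined.
ww2_deaths_low, ww2_deaths_high = 60e6, 80e6
cancer_deaths_per_year = 1.7e6

years_low = ww2_deaths_low / cancer_deaths_per_year    # ~35 years
years_high = ww2_deaths_high / cancer_deaths_per_year  # ~47 years
print(round(years_low), round(years_high))
```

So on the comment's own numbers, curing cancer matches even the high-end WW2 death toll in under fifty years.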

5TrE
It's also conceivable that with his compelling story of kidney failure with his life saved by transplantation, Hitler gets admitted to art school, creating beautiful landscape paintings for the rest of his life. At the same time, the person who cures cancer may also accidentally create a virus that destroys every single living cell on earth within one week after its accidental release into the environment. The only person surviving would be a brain emulation prototype which copies itself, rebuilding human society such that nobody dies or feels pain anymore.

Interesting piece. I was a bit bemused by this, though:

In fact Plato wrote to Archimedes, scolding him about messing around with real levers and ropes when any gentleman would have stayed in his study or possibly, in Archimedes’ case, his bath.

Problematically for the story, Plato died around 347 BCE, and Archimedes wasn't born until 287 BCE -- sixty years later.

I'm not sure what you count as violence, but if you look at the history of the suffrage movement in Britain, you will find that while the movement started out as non-violent, it escalated to include campaigns of window-breaking, arson, and other destruction of property. (Women were also involved in many violent confrontations with police, but it looks like the police always initiated the violence. To what degree women responded in kind and whether that would make their movement violent is unclear to me.) The historians usually describe the vandalism campai... (read more)

When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing epilepsy, 685 children were taken to hospitals, most of whom had seen the pattern not on the original Pokemon episode but on news reports showing the episode which had induced epilepsy.

At the very least, this needs a citation or two, since the following sources cast doubt on the story as presented:

WebMD's account

CNN's account

Snopes' account

And CSI's account, which includes the following:

At about 6:51, the flashing lights filled the screens. By 7

... (read more)

Do you have worked out numbers (in terms of community participation, support dollars, increased real-world violence, etc.) comparing the effect of having the censorship policy and the effect of allowing discussions that would be censored by the proposed policy? The request for "Consequences we haven't considered" is hard to meet until we know with sufficient detail what exactly you have considered.

My gut thinks it is unlikely that having a censorship policy has a smaller negative public relations effect than having occasional discussions that vio... (read more)

What is the evidence that 2 is out? Suppose there are five available effective means to some end. If I take away one of them, doesn't that reduce the availability of effective means to that end? Is the idea supposed to be that the various means are all so widely available that overall availability of means to the relevant end is not affected by eliminating (or greatly reducing) availability of one of them? Seems contentious to me. Moreover, what you say after the claim that 2 is out seems rather to support the claim that 2 is basically correct: poison, bom... (read more)

0ewbrownv
I think you have a point here, but there's a more fundamental problem - there doesn't seem to be much evidence that gun control affects the ability of criminals to get guns. The problem here is similar to prohibition of drugs. Guns and ammunition are widely available in many areas, are relatively easy to smuggle, and are durable goods that can be kept in operation for many decades once acquired. Also, the fact that police and other security officials need them means that they will continue to be produced and/or imported into an area with even very strict prohibition, creating many opportunities for weapons to leak out of official hands. So gun control measures are much better at disarming law-abiding citizens than criminals. Use of guns by criminals does seem to drop a bit when a nation adopts strict gun control policies for a long period of time, but the fact that the victims have been disarmed also means criminals don't need as many guns. If your goal is disarming criminals it isn't at all clear that this is a net benefit.

A more interesting number for the gun control debate is the percentage of households with guns. That number in the U.S. has been declining (pdf), but it is still very high in comparison with other developed nations.

However, exact comparisons of gun ownership rates internationally are tricky. The data is often sparse or non-uniform in the way it is collected. The most consistent comparisons I could find -- and I'd love to see more recent data -- were from the 1989 and 1992 International Crime Surveys. The numbers are reported in this paper on gun ownership... (read more)

That's a good point. Looks like an oversight on my part. I was probably overly focused on the formal side that aims to describe normatively correct reasoning. (Even doing that, I missed some things, e.g. decision theory.) I hope to write up a more detailed, concrete, and positive proposal in the next couple of days. I will include at least one -- and probably two -- courses that look at failures of good reasoning in that recommendation.

1jsalvatier
I look forward to it :) Another thing that comes to mind: if you're advising the curriculum committee and not directly in charge, you may want to strategize about how best to convince them to take a more lesswrongy attitude. Things that spring to mind:

* getting multiple people to say similar things
* making the same argument repeatedly and in private with members of the committee
* getting a speaker (luke?) to come in and make the case
* finding articles that make the same points you do

I don't want to have a dispute about words. When I talk about the logic curriculum in my department, I have in mind the broader term. The entry-level course in logic already has some probability/statistics content. There isn't a sub-program in logic, like a minor or anything, that has a structural component for anyone to fight about. I would like to see more courses dedicated to probability and induction from a philosophical perspective. But if I get that, then I'm not going to fight about the word "logic." I'd be happy to take a more generic label, like CMU's logic, computation, and methodology.

0jsalvatier
Ah, okay, that makes sense then. I think part of why I'm confused is that none of the courses you proposed are focused on psychology (heuristics and biases being the standard recommendation). Any reason for that?

Because I see those things as part of logic. As I see it, logic as typically taught in mathematics and philosophy departments from 1950 on dropped at least half of what logic is supposed to be about. People like Church taught philosophers to think that logic is about having a formal, deductive calculus, not about the norms of reasoning. I think that's a mistake. So, in reforming the logic curriculum, I think one goal should be to restore something that has been lost: interest in norms of reasoning across the board.

0jsalvatier
Hmm, is that best accomplished by trying to reappropriate the word 'logic'? Mathematicians, philosophers, etc. seem like they have a pretty firm idea about what they mean by 'logic', and going against that is probably hard. Trying to get a Heuristics and Biases or statistics course into the logic curriculum seems like it would get a lot of pushback. Can the word 'logic' itself be that valuable? Why not pick a new word?

I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:

(1) Blind luck plus selection based on fitness of some sort. (2) Reasoning from other concepts, goals, and experience.

I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the ... (read more)

0Richard_Kennaway
Yes, it's (2) that I'm interested in. Is there some small set of axioms, on the basis of which you can set up causal reasoning, as has been done for probability theory? And which can then be used as a gold standard against which to measure our untutored fumblings that result from (1)?

That seems like a very different question than, say, how humans actually came by their tendency to attribute causation. For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.

What you seem to be after is very dif... (read more)

2Richard_Kennaway
This is not an explanation: it is simply saying "evolution did it". An explanation should exhibit the mechanism whereby the concept is acquired.

That is one way of presenting the thought experiment. Another way of presenting the thought experiment is to ask how a baby arrives at the concept. Then we are not imagining a creature that has different faculties than an ordinary human. Another way is to imagine a robot that we are building. How can the robot make causal inferences? Again, "we design it that way" is no more of an answer than "God made us that way" or "evolution made us that way". Consider the question in the spirit of Jaynes' use of a robot in presenting probability theory. His robot is concerned with making probabilistic inferences but knows nothing of causes; this robot is concerned with inferring causes. How would we design it that way?

Pearl's works presuppose an existing knowledge of causation, but do not tell us how to first acquire it. That is part of the question. What resources does it need, to proceed from ignorance of causation to knowledge of causation?

When you ask (in your koan) how the process of attributing causation gets started, what exactly are you asking about? Are you asking how humans actually came by their tendency to attribute causation? Are you asking how an AI might do so? Are you asking about how causal attributions are ultimately justified? Or what?

0Richard_Kennaway
I think these are all aspects of the same thing: how might an intelligent entity arrive at correct knowledge about causes, starting from a lack of even the concept of a cause?

What do you think about debates about which axioms or rules of inference to endorse? I'm thinking here about disputes between classical mathematicians and varieties of constructivist mathematicians, which sometimes show themselves in which proofs are counted as legitimate.

I am tempted to back up a level and say that there is little or no dispute about conditional claims: if you give me these axioms and these rules of inference, then these are the provable claims. The constructivist might say, "Yes, that's a perfectly good non-constructive proof, but... (read more)

1Richard_Kennaway
First-, or possibly second-order predicate logic has swept the board. Constructivism is just a branch of mathematics. Everyone understands the difference between constructive and non-constructive proofs, and while building logical systems in which only constructive proofs can be expressed is a useful activity, I think there are not many mathematicians who really believe that a non-constructive proof is worthless. There is some ambivalence toward such things as the continuum hypothesis and the axiom of choice, but those issues never seem to have any practical import outside of their own domain.

Though it doesn't yet exist, if such a course sounds as helpful to you as it does to me, then you could of course try to work with CFAR and other interested parties to try to develop such a course.

I am interested. Should I contact Julia directly or is there something else I should do in order to get involved?

Also, since you mention Alexander's book, let me make a shameless plug here: Justin Sytsma and I just finished a draft of our own introduction to experimental philosophy, which is under contract with Broadview and should be in print in the next year or so.

1lukeprog
I look forward to your book with Sytsma! Yes, contact Julia directly.
4loup-vaillant
So, we have a few alternatives:

1. No filters at all.
2. Full gating (if you didn't go through the prerequisite courses, you're out).
3. Instructor's approval.
4. Entry tests.
5. Big warnings about prerequisites.

I think the best way is probably a mix:

* If you took (and passed) the prerequisite courses, you can enter.
* Otherwise, you take the entry test (if available).
  * Above some threshold, you can enter.
  * Below some threshold, you're toast.
  * Between them, you need instructor approval.

The idea is to make prerequisite courses optional, while keeping the actual proficiency of the prerequisite material mandatory.

I didn't realize the link I gave was not viewable: apologies for that. Also, wow. That PHYS 123 "page" is really embarrassingly bad.

I was going to say that the problem from the instructor's point of view is deciding whether the student really has the necessary background, but Desrtopa is probably right that some sort of testing system could be set up.

In one sense, I agree that there shouldn't be any gating. It is overly-paternalistic. Students should be allowed to risk taking advanced classes as long as they don't gripe about their failures later. But on the other hand, the actual result that I see in my classes is that many -- and here I mean maybe as many as half -- of the students i... (read more)

4Luke_A_Somers
If you're going to back off on the gating, you need to provide sufficient guidance to the students on what they will practically need to know that they can make an informed choice. I took a course in baroque music that went very badly. If I had known how much music theory I would have to have, and how much facility I would have to have with it, I would not have taken the course.

The content is informal logic: discourse analysis, informal fallacies (like ad hominem, ad populum, etc.). Depending on who teaches it, there might be some simple syllogistic logic or some translation problems.

I like the idea of requiring logic along with the intro course. I'll keep that one in mind.

I strongly agree with your comment. What concrete steps would you take to fix the problem? Are there specific classes you would add or things you would emphasize in existing classes? Are there specific classes that you would remove or things you would de-emphasize in existing classes?

0jsalvatier
If you agree here, I'm curious why you're focusing on reforming the logic curriculum. Why not focus on shifting resources from teaching logic to teaching the standard things recommended here (probability theory, heuristics and biases, psychology, etc.)?
7pragmatist
These would be the formal classes in my ideal philosophy curriculum:

* Symbolic logic (sentential and predicate logic, some model theory)
* Set theory and category theory
* Mathematical logic (along the lines of your 454)
* Scientific reasoning (elementary statistics, causal inference)
* Probability theory and the philosophy of probability
* Rational decision-making (decision theory, heuristics and biases)
* Formal epistemology (Bayesian epistemology, confirmation theory, computational learning theory)
* Some sort of "programming for philosophers" class, teaching basic programming but emphasizing the connections with the material they've learned in their logic classes.

You might be right that I'm reading too much into what you've written. However, I suspect (especially given the other comments in this thread and the comments on the reddit thread) that the reading "Philosophy is overwhelmingly bad and should be killed with fire," is the one that readers are most likely to actually give to what you've written. I don't know whether there is a good way to both (a) make the points you want to make about improving philosophy education and (b) make the stronger reading unlikely.

I'm curious: if you couldn't have your w... (read more)

6lukeprog
Yes; hopefully I can do better in my next post.

One course I'd want in every philosophy curriculum would be something like "The Science of Changing Your Mind," based on the more epistemically-focused stuff that CFAR is learning how to teach to people. This course offering doesn't exist yet, but if it did then it would be a course which has people drill the particular skills involved in Not Fooling Oneself. You know, teachable rationality skills: be specific, avoid motivated cognition, get curious, etc. — but after we've figured out how to teach these things effectively, and aren't just guessing at which exercises might be effective. (Why this? Because Philosophy Needs to Trust Your Rationality Even Though It Shouldn't.)

Though it doesn't yet exist, if such a course sounds as helpful to you as it does to me, then you could of course try to work with CFAR and other interested parties to try to develop such a course. CFAR is already working with Nobel laureate Saul Perlmutter at Berkeley to develop some kind of course on rationality, though I don't have the details. I know CFAR president Julia Galef is particularly passionate about the relevance of trainable rationality skills to successful philosophical practice.

What about courses that could e.g. be run from existing textbooks? It is difficult to suggest entry-level courses that would be useful. Aaronson's course Philosophy and Theoretical Computer Science could be good, but it seems to require significant background in computability and complexity theory. One candidate might be a course in probability theory and its implications for philosophy of science — the kind of material covered in the early chapters of Koller & Friedman (2009) and then Howson & Urbach (2005) (or, more briefly, Yudkowsky 2005). Another candidate would be a course on experimental philosophy, perhaps expanding on Alexander (2012).

The head of your dissertation committee was a co-author with Glymour on the work that Pearl built on with Causality.

I was, in fact, aware of that. ;)

In the grand scheme of things, I may have had an odd education. However, it's not like I'm the only student that Glymour, Spirtes, Machery, and many of my other teachers have had. Basically every student who went through Pitt HPS or CMU's Philosophy Department had the same or deeper exposure to psychology, cognitive science, neuroscience, causal Bayes nets, confirmation theory, etc. Either that, or they got... (read more)

2lukeprog
Of course. I said it for the benefit of others. But I guess I should have said "As I'm sure you know..."

I think you might be reading too much into what I've claimed in my article. I said things like:

* "Not all philosophy is this bad, but much of it is bad enough..." (not, e.g., "most philosophy is this bad")
* "you'll find that [these classes] spend a lot of time with..." (not, e.g., "spend most of their time with...")
* "More X... less Y..." (not, e.g., "X, not Y")

No, the link goes to the "Western Philosophy" section (see the URL), the first subsection of which happens to be Aristotle.

Provocative article. I agree that philosophers should be reading Pearl and Kahneman. I even agree that philosophers should spend more time with Pearl and Kahneman (and lots of other contemporary thinkers) than they do with Plato and Kant. But then, that pretty much describes my own graduate training in philosophy. And it describes the graduate training (at a very different school) received by many of the students in the department where I now teach. I recognize that my experience may be unusual, but I wonder if philosophy and philosophical training really ... (read more)

170lukeprog

But then, that pretty much describes my own graduate training in philosophy.

You did indeed have an unusual philosophical training. In fact, the head of your dissertation committee was a co-author with Glymour on the work that Pearl built on with Causality.

You seem to think that philosophical training involves a lot of Aristotelian ideas

Not really. Term logic is my only mention of Aristotle, and I know that philosophy departments focus on first-order logic and not term logic these days. Your training was not unusual in this matter. First-order logic ... (read more)

0Peterdjones
Excellent post overall. I particularly agree with this part. The project of regimenting philosophy to conform to someone's ideas of correctness or meaningfulness or worth isn't just objectionably illiberal, although it is; it is counterproductive, because you need some discipline that houses the weirdos. If none of them do, then those leftfield ideas are going to slip through the cracks.
5A1987dM
I second that.

Even that's not quite right. There is a tie for 5th place between Harvard and Pitt. The fact that Harvard is listed before Pitt appears to be due to lexicographical order.

That's an interesting point. How precise do you think we have to be with respect to feedbacks in the climate system if we are interested in an existential risk question? And do you have other uncertainties in mind or just uncertainties about feedbacks?

The first thing I thought on reading your reply was that insofar as the evidence supports positive feedbacks, the evidence also supports the claim that there is existential risk from climate change. But then I thought maybe we need to know more about how far away the next equilibrium is -- assuming there is o... (read more)

I really don't understand the row for climate change. What exactly is meant by "inference" in the data column? I don't know what you want to count as data, but it seems to me that the data with respect to climate change include increasingly good direct measurements of temperature and greenhouse gas concentrations over the last hundred years or so, whatever goes into the basis of relevant physical and chemical theories (like theories of heat transfer, cloud formation, solar dynamics, and so forth), and measurements of proxies for temperature and g... (read more)

1Stuart_Armstrong
The uncertainties within the models are swamped by uncertainties outside the model - ie whether feedbacks are properly accounted for or not. I agree that "inference" on its own is very odd. I would have put "inference and observations (delayed feedback)".

Yeah, I still think you're talking past one another. Wasserman's point is that something being a 95% confidence interval deductively entails that it has the relevant kind of frequentist coverage. That can no more fail to be true than 2+2 can stop being 4. The null, then, ought to be simply that these are really 95% confidence intervals, and the data then tell against that null by undermining a logical consequence of the null. The data might be excellent evidence that these aren't 95% confidence intervals. Of course, figuring out exactly why they aren't is ... (read more)

0gwern
I'm saying that this stuff about 95% CI is a completely empty and broken promise; if we see the coverage blown routinely, as we do in particle physics in this specific case, the CI is completely useless - it didn't deliver what it was deductively promised. It's like having a Ouija board which is guaranteed to be right 95% of the time, but oh wait, it was right just 90% of the time so I guess it wasn't really a Ouija board after all. Even if we had this chimerical '95% confidence interval', we could never know that it was a genuine 95% confidence interval. I am reminded of Borges: It is universally admitted that the 95% confidence interval is a result of good coverage; such is declared in all the papers, textbooks, biographies of illustrious statisticians and other texts whose authority is unquestionable... (Given that "95% CIs" are not 95% CIs, I will content myself with honest credible intervals, which at least are what they pretend to be.)

I suspect you're talking past one another, but maybe I'm missing something. I skimmed the paper you linked and intend to come back to it in a few weeks, when I am less busy, but based on skimming, I would expect the frequentist to say something like, "You're showing me a finite collection of 95% confidence intervals for which it is not the case that 95% of them cover the truth, but the claim is that in the long run, 95% of them will cover the truth. And the claim about the long run is a mathematical fact."

I can see having worries that this doesn'... (read more)

0gwern
Well, I'll put it this way - if we take as our null hypothesis 'these 95% CIs really did have 95% coverage', would the observed coverage-rate have p&lt;0.05? If it did, would you or he resort to 'No True Scotsman' again? (A hint as to the answer: just a few non-coverages drive the null down to extremely low levels - think about multiplying 0.05 by 0.05...)
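The multiplication gwern gestures at is a binomial tail probability. A minimal sketch (the 10-interval, 4-miss example is hypothetical, chosen only for illustration):

```python
from math import comb

def binom_tail(m, k, p=0.05):
    """P(at least k misses among m intervals, if each misses with prob p)."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

# Under the null that each published "95% CI" really has 95% coverage,
# even a handful of observed misses is extremely surprising:
print(binom_tail(10, 4))  # about 0.001, far below p = 0.05
```

A few non-coverages really do drive the null to very low levels, as the comment says.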

The point depends on differences between confidence intervals and credible intervals.

Roughly, frequentist confidence intervals, but not Bayesian credible intervals, have the following coverage guarantee: if you repeat the sampling and analysis procedure over and over, in the long-run, the confidence intervals produced cover the truth some percentage of the time corresponding to the confidence level. If I set a 95% confidence level, then in the limit, 95% of the intervals I generate will cover the truth.

Bayesian credible intervals, on the other hand, tell u... (read more)
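The long-run coverage guarantee described above is easy to check by simulation; a minimal sketch, assuming normally distributed data and a plain z-interval for the mean (all parameters here are illustrative):

```python
import random
import statistics

def normal_ci95(sample):
    """Approximate 95% z-interval for the mean, using the sample SD
    as a stand-in for sigma (fine for moderately large samples)."""
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return (m - 1.96 * se, m + 1.96 * se)

random.seed(0)
true_mean = 10.0
trials = 2000
hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 3.0) for _ in range(50)]
    lo, hi = normal_ci95(sample)
    hits += lo <= true_mean <= hi

coverage = hits / trials  # close to 0.95 across repeated sampling
```

Run long enough, the fraction of intervals covering the truth hovers near 0.95 - this is the sense in which the guarantee is a claim about the procedure, not about any single interval.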

1gwern
Right. This is what my comment there was pointing out: in his very own example, physics, 95% CIs do not get you 95% coverage since when we look at particle physics's 95% CIs, they are too narrow. Just like his Bayesian's 95% credible intervals. So what's the point?

I don't see how this applies to ciphergoth's example. In the example under consideration, the person offering you the bet cannot make money, and the person offered the bet cannot lose money. The question is, "For which of two events would you like to be paid some set amount of money, say $5, in case it occurs?" One of the events is that a fair coin flip comes up heads. The other is an ordinary one-off occurrence, like the election of Obama in 2012 or the sun exploding tomorrow.

The goal is to elicit the degree of belief that the person has in the ... (read more)
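The elicitation logic sketched above reduces to a comparison of expected payoffs: being paid $5 if the one-off event occurs is worth more to you than being paid $5 on a fair coin flip exactly when your degree of belief in the event exceeds 0.5. A minimal sketch (function name and payout are illustrative):

```python
def preferred_side(p_event, payout=5.0):
    """Which 'be paid if...' option a person with degree of belief
    p_event should pick: the one-off event, or a fair coin's heads."""
    ev_event = payout * p_event  # expected value of being paid on the event
    ev_coin = payout * 0.5       # expected value of being paid on heads
    return "event" if ev_event > ev_coin else "coin"
```

Since the person can only gain, the choice reveals belief without exposing them to loss.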

0roystgnr
For a rational person with infinite processing power, my point doesn't apply. You can also neglect air resistance when determining the trajectory of a perfectly spherical cow in a vacuum. For a person of limited intelligence (i.e. all of us), it's typically necessary to pick easily-evaluated heuristics that can be used in place of detailed analysis of every decision. I last used my "people offering me free stuff out of nowhere are probably trying to scam me somehow" heuristic while opening mail a few days ago. If ciphergoth's interlocutor had been subconsciously thinking the same way, then this time they missed a valuable opportunity for introspection, but it's not immediately obvious that such false positive mistakes are worse than the increased possibility of false negatives that would be created if they instead tried to successfully outthink every "cannot lose" bet that comes their way.
0lmm
The person offering the bet still (presumably) wants to minimize their loss, so they would be more likely to offer it if the unknown occurrence was impossible than if it was certain.

The point applies well to evidentialists but not so well to personalists. If I am a personalist Bayesian -- the kind of Bayesian for which all of the nice coherence results apply -- then my priors just are my actual degrees of belief prior to conducting whatever experiment is at stake. If I do my elicitation correctly, then there is just no sense to saying that my prior is bullshit, regardless of whether it is calibrated well against whatever data someone else happens to think is relevant. Personalists simply don't accept any such calibration constraint... (read more)

I just want to know why he's only betting $50.

7beoShaffer
Because it's funnier that way.
mwengler190

Because the stupider the prediction is that somebody is making, the harder it is to get them to put their money where their mouth is. The Bayesian is hoping that $50 is a price the other guy is willing to pay to signal his affiliation with the other non-Bayesians.

You could be right, but I am skeptical. I would like to see evidence -- preferably in the form of bibliometric analysis -- that practicing scientists who use frequentist statistical techniques (a) don't make use of background information, and (b) publish more successfully than comparable scientists who do make use of background information.

That depends heavily on what "the method" picks out. If you mean the machinery of a null hypothesis significance test against a fixed-for-all-time significance level of 0.05, then I agree, the method doesn't promote good practice. But if we're talking about frequentism, then identifying the method with null hypothesis significance testing looks like attacking a straw man.

3Luke_A_Somers
I know a bunch of scientists who learned a ton of canned tricks and take the (frequentist) statisticians' word on how likely associations are... and the statisticians never bothered to ask how a priori likely these associations were. If this is a straw man, it is one that has regrettably been instantiated over and over again in real life.

Fair? No. Funny? Yes!

The main thing that jumps out at me is that the strip plays on a caricature of frequentists as unable or unwilling to use background information. (Yes, the strip also caricatures Bayesians as ultimately concerned with betting, which isn't always true either, but the frequentist is clearly the butt of the joke.) Anyway, Deborah Mayo has been picking on the misconception about frequentists for a while now: see here and here, for examples. I read Mayo as saying, roughly, that of course frequentists make use of background information... (read more)

4ChristianKl
If not using background information means you can publish your paper with frequentist methods, scientists often don't use background information. Those scientists who use less background information get more significant results. Therefore they get more published papers. Then they get more funding than the people who use more background information. It's publish or perish.
7Luke_A_Somers
Good frequentists do that. The method itself doesn't promote this good practice.

Rather than consulting Wikipedia, the SEP article on consequentialism is probably the best place to start for an overview.

0aspera
Thanks, I'll check it out.