Comment author: pjeby 15 December 2013 11:30:16PM 8 points [-]

is there ever a time we should try to make ourselves believe things that we don't necessarily have a good reason to think are true?

This is less the problem than the part where we already believe lots of things that we don't have a good reason to think are true. Pessimists have a tendency to demand a higher burden of proof for positive thoughts than negative ones. If they were just as skeptical of their negative beliefs, more of the positive would get through!

That is, it's not that we have to add a bunch of beliefs in order to be positive, it's that we need to stop believing all sorts of pessimistic things, or at least believing that they're relevant, or that they're going to be a disaster.

If a thing you're pessimistic about isn't under your control, for example, then there's probably no point worrying about it. And if it is under your control, then you could focus on the part where you can do something.

The part where we struggle is when we (in effect) spend lots of time arguing over whether we control something or we don't, neither believing the matter is fully in hand, nor willing to dismiss it as not worth worrying about/not in one's control.

So one's pessimistic objections tend to be phrased as if the outcome were out of one's control. If you think about accomplishing something, the objection might be an absolute like, "you'll never pull that off", instead of the more accurate belief, "you'll never pull that off unless you make some changes from what you did last time".

Bottom line: it's not about what's true or false, but about which thoughts are relevant to load into working memory. Many true things are not useful, and many useful things are only approximately true.

Comment author: Mayo 16 December 2013 03:36:20AM 1 point [-]

Just a couple of points on this discussion, which I'm sure I walked in at the middle of: (1) One thing it illustrates is the important difference between what one "should" believe in the sense of it being prudential in some way, versus a very different notion: what has or has not been sufficiently well probed to regard as warranted (e.g., as a solution to a problem, broadly conceived). Of course, if the problem happens to be "to promote luckiness", a well-tested solution could turn out to be "don't demand well-testedness, but think on the bright side."

(2) What I think is missing from some of this discussion is the importance of authenticity. Keeping up with contacts, and all the other behaviors, if performed as part of a contrived plan will backfire.

In response to comment by [deleted] on The Statistician's Fallacy
Comment author: ChrisHallquist 11 December 2013 11:59:35PM -1 points [-]

I replaced "orthodox statistics" with "frequentism" in the post in case that will make people happy, but as I understood him, Ilya wasn't just complaining about that, but also about my own implied support for Bayesianism over frequentism. And maybe the standard LessWrong position on that debate is wrong, but to come in and announce that the LW view is wrong without argument, when it's been argued for at such great length, seems odd, to put it mildly.

Ilya comes across as not being aware of how much Eliezer and other people here have written about that debate. In fact, it's not even clear to me if he understands what someone like Eliezer (or for that matter, an academic epistemologist) means when they say "Bayesianism."

Comment author: Mayo 13 December 2013 03:17:29AM 4 points [-]

I realize Eliezer holds great sway on this blog, but I think people here ought to question a bit more closely some of his most winning arguments in favor of casting out frequentism for Bayesianism. I've only read this blog around 4 times, and each time I've found a howler apparently accepted. But putting those aside, I find it curious that the results on psychological biases that are given so much weight on this blog are arrived at and affirmed by means of error-statistical methodology. errorstatistics.com

Comment author: JoshuaZ 10 December 2013 01:50:15AM 5 points [-]

Bayesians will realize that, since there's a good chance of that happening even when the conclusion is correct and well-supported by the evidence, finding mistakes in the statistics is only weak evidence that the conclusion is wrong.

I'm not sure why you think this conclusion is particularly Bayesian.
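
The update being described in the quoted passage can be made concrete with a toy Bayes'-theorem calculation. All numbers below are illustrative stand-ins, not anything from the original comment:

```python
# Toy Bayes update: how much does "a statistical mistake was found"
# shift belief that a paper's conclusion is wrong? If mistakes are
# common even in correct papers, the evidence is weak.
prior_wrong = 0.3                 # illustrative P(conclusion wrong) beforehand
p_mistake_given_wrong = 0.9       # mistakes very likely in wrong papers
p_mistake_given_right = 0.6       # but also common in correct, well-supported ones

# P(mistake found), by the law of total probability
p_mistake = (p_mistake_given_wrong * prior_wrong
             + p_mistake_given_right * (1 - prior_wrong))

# Posterior via Bayes' theorem: only a modest shift from the 0.3 prior
posterior_wrong = p_mistake_given_wrong * prior_wrong / p_mistake
print(round(posterior_wrong, 3))  # → 0.391
```

Because the likelihood ratio (0.9 vs 0.6) is close to 1, the posterior barely moves, which is exactly the "weak evidence" point.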

she dismissed Bayesianism in favor of orthodox statistics

You mean frequentism, right? Then just say so. At this point Bayesianism is so widespread, and so many statisticians use both frequentist and Bayesian techniques in practice, that using "frequentism" as interchangeable with "orthodox" seems off.

Comment author: Mayo 13 December 2013 03:04:55AM 6 points [-]

"Frequentism" is as abused a term as "orthodox statistics", and in any event it tends to evoke a conception of people interested in direct inference: assigning a probability (based on observed relative frequencies) to outcomes. Frequentism in statistical inference, instead, refers to the use of error probabilities--based on sampling distributions--in order to assess and control a method's capability to probe a given discrepancy or inferential flaw of interest. Thus a more suitable name would be error-probability statistics, or just error statistics. One infers, for example, that a statistical hypothesis or other claim is well warranted or severely tested just to the extent that the method was highly capable of detecting the flaw, and yet routinely produces results indicating the absence of a flaw. But the most central role of statistical methods in the error-statistical philosophy is to block inferences on a variety of grounds, e.g., that the method had little capacity to distinguish between various factors or biases, or failed to give the assumptions of the models used a sufficiently hard time.

But the real reason I wrote is because the first few sentences of this post made me think that perhaps the professor was me! I'm glad to hear there are other female philosophers of science who are frequentists. Yet it wasn't me, given the rest of the post.

Comment author: Cyan 26 February 2010 08:49:22PM *  7 points [-]

Eliezer's views as expressed in Blueberry's links touch on a key identifying characteristic of frequentism: the tendency to think of probabilities as inherent properties of objects. More concretely, a pure frequentist (a being as rare as a pure Bayesian) treats probabilities as proper only to outcomes of a repeatable random experiment. (The definition of such a thing is pretty tricky, of course.)

What does that mean for frequentist statistical inference? Well, it's forbidden to assign probabilities to anything that is deterministic in your model of reality. So you have estimators, which are functions of the random data and thus random themselves, and you assess how good they are for your purpose by looking at their sampling distributions. You have confidence interval procedures, the endpoints of which are random variables, and you assess the sampling probability that the interval contains the true value of the parameter (and the width of the interval, to avoid pathological intervals that have nothing to do with the data). You have statistical hypothesis testing, which categorizes a simple hypothesis as “rejected” or “not rejected” based on a procedure assessed in terms of the sampling probability of an error in the categorization. You have, basically, anything you can come up with, provided you justify it in terms of its sampling properties over infinitely repeated random experiments.
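
The confidence-interval point above can be checked directly: coverage is a sampling property of the *procedure* over repeated experiments, not of any single interval. A minimal simulation sketch, assuming a normal mean with known variance (all numbers illustrative):

```python
import math
import random

# Coverage of a 95% CI for a normal mean (known sigma), assessed the
# frequentist way: over many repeated experiments, how often does the
# random interval contain the fixed true parameter?
random.seed(0)
mu_true, sigma, n, z = 10.0, 2.0, 25, 1.96
trials, covered = 10_000, 0

for _ in range(trials):
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)   # half-width of the 95% interval
    if xbar - half <= mu_true <= xbar + half:
        covered += 1

print(covered / trials)  # close to 0.95
```

The endpoints `xbar ± half` are the random variables here; `mu_true` never moves, which is why a pure frequentist refuses to assign it a probability.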

In response to comment by Cyan on What is Bayesianism?
Comment author: Mayo 29 September 2013 07:24:43AM 4 points [-]

I'm sorry to see such wrongheaded views of frequentism here. Frequentists also assign probabilities to events where the probabilistic introduction is entirely based on limited information rather than a literal randomly generated phenomenon. If Fisher or Neyman were ever actually read by people purporting to understand frequentist/Bayesian issues, they'd have a radically different idea. Readers of this blog should take it upon themselves to check out some of the vast oversimplifications.

And I'm sorry, but Reichenbach's frequentism has very little to do with frequentist statistics. Reichenbach, a philosopher, had the idea that propositions had frequentist probabilities. So scientific hypotheses--which would not be assigned probabilities by frequentist statisticians--could have frequentist probabilities for Reichenbach, even though he didn't think we knew enough yet to judge them. He thought that at some point we'd be able to judge, for a hypothesis of a given type, how frequently hypotheses like it would be true. I think it's a problematic idea, but my point was just to illustrate that some large items are being misrepresented here, and people sold a wrongheaded view. Just in case anyone cares. Sorry to interrupt the conversation. (errorstatistics.com)

Comment author: lukeprog 08 September 2013 11:13:38PM 2 points [-]

Yeah. The problem is that most scientists seem to still be taught from textbooks that use a Popperian paradigm, or at least Popperian language; they aren't necessarily taught probability theory very thoroughly; they're used to publishing papers that use p-value science even though they kinda know it's wrong; etc.

So maybe if we had an extended discussion about philosophy of science, they'd retract their Popperian statements and reformulate them to say something kinda related but less wrong. Maybe they're just sloppy with their philosophy of science when talking about subjects they don't put much credence in.

This does make it difficult to measure the degree to which, as Eliezer puts it, "the world is mad." Maybe the world looks mad when you take scientists' dinner party statements at face value, but looks less mad when you watch them try to solve problems they care about. On the other hand, even when looking at work they seem to care about, it often doesn't look like scientists know the basics of philosophy of science. Then again, maybe it's just an incentives problem. E.g. maybe the scientist's field basically requires you to publish with p-values, even if the scientists themselves are secretly Bayesians.

Comment author: Mayo 29 September 2013 06:52:12AM 4 points [-]

If there were genuine philosophy-of-science illumination, it would be clear that, despite the shortcomings of the logical empiricist setting in which Popper found himself, there is much more of value in a sophisticated Popperian methodological falsificationism than in Bayesianism. If scientists were interested in the most probable hypotheses, they would stay as close to the data as possible. But in fact they want interesting, informative, risky theories and genuine explanations. This goes against the Bayesian probabilist ideal. Moreover, you cannot falsify with Bayes' theorem, so you'd have to start out with an exhaustive set of hypotheses that could account for the data (already silly), and then you'd never get rid of them--they could only be probabilistically disconfirmed.
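
The "only probabilistically disconfirmed" point can be seen in a toy update: as long as a hypothesis assigns nonzero likelihood to each observation, Bayes' theorem can shrink its posterior but never drive it to zero. The hypotheses and likelihoods below are illustrative inventions, not from the comment:

```python
# Two rival hypotheses; H1 fits each observation badly (likelihood 0.1)
# but not impossibly. Repeated Bayesian conditioning disconfirms H1
# without ever falsifying it.
post = {"H1": 0.5, "H2": 0.5}              # illustrative priors
likelihood = {"H1": 0.1, "H2": 0.9}        # per-observation P(data | H)

for _ in range(10):                        # ten observations favoring H2
    unnorm = {h: post[h] * likelihood[h] for h in post}
    total = sum(unnorm.values())
    post = {h: p / total for h, p in unnorm.items()}

print(post["H1"])   # tiny, but strictly positive
assert post["H1"] > 0
```

To actually eliminate H1 you would need an observation to which H1 assigns likelihood exactly zero, which is the falsificationist's move, not the probabilist's.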

Comment author: jsteinhardt 15 September 2013 01:34:42AM 4 points [-]

For what it's worth, I understand well the arguments in favor of Bayes, yet I don't think that scientific results should be published in a Bayesian manner. This is not to say that I don't think that frequentist statistics is frequently and grossly misused by many scientists, but I don't think Bayes is the solution to this. In fact, many of the problems with how statistics is used, such as implicitly performing many multiple comparisons without controlling for this, would be just as large a problem with Bayesian statistics.

Either the evidence is strong enough to overwhelm any reasonable prior, in which case frequentist statistics will detect the result just fine; or else the evidence is not so strong, in which case you are reduced to arguing about priors, which seems bad if the goal is to create a societal construct that reliably uncovers useful new truths.
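
The multiple-comparisons worry mentioned above is easy to quantify: run m uncorrected tests at the 0.05 level with all nulls true, and a false positive becomes more likely than not well before m gets large. A minimal sketch (assuming independent tests):

```python
# Family-wise error rate (FWER) for m independent tests at level alpha,
# all null hypotheses true, versus the Bonferroni-corrected version.
alpha, m = 0.05, 20
fwer_uncorrected = 1 - (1 - alpha) ** m          # P(at least one false positive)
fwer_bonferroni = 1 - (1 - alpha / m) ** m       # each test run at alpha/m

print(round(fwer_uncorrected, 3))   # → 0.642: "significance" is the likely outcome
print(round(fwer_bonferroni, 3))    # → 0.049: held below alpha
```

This is precisely the kind of error-probability bookkeeping the thread is arguing over: the correction depends on how many tests were run, not just on the data observed.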

Comment author: Mayo 29 September 2013 06:44:56AM 4 points [-]

No: the multiple comparisons problem, like optional stopping and other selection effects that alter error probabilities, is a much greater problem in Bayesian statistics, because Bayesians regard error probabilities, and the sampling distributions on which they are based, as irrelevant to inference once the data are in hand. That is a consequence of the likelihood principle (which follows from inference by Bayes' theorem). I find it interesting that this blog takes a great interest in human biases, but guess what methodology is relied upon to provide evidence of those biases? Frequentist methods.
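
The optional-stopping effect mentioned above can be demonstrated by simulation: peek at a nominal 5% z-test after every observation and stop at the first "significant" result, and the realized error rate under the null climbs far above 5%. A sketch, assuming a known-variance z-test on standard normal data:

```python
import math
import random

# Under the null (mu = 0), compute the z-statistic after every observation
# up to n_max, stopping as soon as |z| > 1.96. Nominal level: 5%.
random.seed(1)
n_max, trials, z_crit = 100, 2_000, 1.96
rejected = 0

for _ in range(trials):
    total = 0.0
    for n in range(1, n_max + 1):
        total += random.gauss(0.0, 1.0)
        z = total / math.sqrt(n)       # running z-statistic
        if abs(z) > z_crit:
            rejected += 1              # a "significant" result under a true null
            break

print(rejected / trials)  # well above 0.05
```

A frequentist must adjust for the stopping rule because it changes the sampling distribution; under the likelihood principle the stopping rule drops out, which is exactly the disagreement in this subthread.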

Comment author: Mayo 16 June 2013 09:46:29AM 4 points [-]

Y'all are/were having a better discussion here than we've had on my blog for a while; I came across it by chance. Corey understands error statistics.