Comment author: Matt_Simpson 02 July 2010 08:28:50PM *  0 points [-]

Yesterday, I posted my thoughts in last month's thread on the article. I'm reproducing them here since this is where the discussion is at:

[cousin_it summarizing Gelman's position] See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?

Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.

However, a difference between the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
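The kind of check being discussed here is what Gelman calls a posterior predictive check: simulate replicated data under your fitted model and see whether a test statistic of the real data looks typical. Below is a minimal sketch for a normal model with known variance and a conjugate normal prior on the mean; all names and numbers are illustrative, not anyone's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data for the sketch (true mean 2, known sd 1).
data = rng.normal(loc=2.0, scale=1.0, size=100)

# Conjugate normal prior on the mean: N(mu0, tau0^2).
mu0, tau0 = 0.0, 10.0
sigma = 1.0  # treated as known, for simplicity
n = len(data)

# Closed-form posterior for the mean under the normal-normal model.
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

# Posterior predictive check: draw replicated datasets from the
# fitted model and compare a test statistic (here, the mean)
# against the observed value.
reps = 1000
stats = np.empty(reps)
for i in range(reps):
    mu = rng.normal(post_mean, np.sqrt(post_var))
    replicated = rng.normal(mu, sigma, size=n)
    stats[i] = replicated.mean()

# Bayesian p-value: fraction of replicated statistics at or above
# the observed one. Values near 0 or 1 flag model/prior misfit.
p = (stats >= data.mean()).mean()
print(p)
```

Since the model here actually matches the data-generating process, the p-value comes out moderate; feeding in data the model can't produce would push it toward 0 or 1, which is the signal that your model of your prior (or the model itself) is off.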

Comment author: cupholder 05 July 2010 08:44:09PM 0 points [-]

After-the-fact model checking is completely incompatible with perfect Bayesianism, if we define perfect Bayesianism as

  1. Define a model with some parameters.
  2. Pick a prior over the parameters.
  3. Collect evidence.
  4. Calculate the likelihood using the evidence and model.
  5. Calculate the posterior by multiplying the prior by the likelihood.
  6. When new evidence comes in, set the prior to the posterior and go to step 4.

There's no step for checking if you should reject the model; there's no provision here for deciding if you 'just have really wrong priors.' In practice, of course, we often do check to see if the model makes sense in light of new evidence, but then I wouldn't think we're operating like perfect Bayesians any more. I would expect a perfect Bayesian to operate according to the Cox-Jaynes-Yudkowsky way of thinking, which (if I understand them right) has no provision for model checking, only for updating according to the prior (or previous posterior) and likelihood.
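The six-step loop above can be made concrete. Here is a toy sketch for a coin with unknown bias, using a discrete grid of parameter values (all names and numbers are illustrative):

```python
import numpy as np

# Steps 1-2: a model (coin with unknown bias theta) and a prior over
# its parameter, here uniform over a discrete grid.
thetas = np.linspace(0.01, 0.99, 99)
prior = np.full_like(thetas, 1.0 / len(thetas))

def update(belief, heads, tails):
    """Steps 4-5: multiply by the likelihood and renormalize."""
    likelihood = thetas**heads * (1 - thetas)**tails
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Steps 3 and 6: as each batch of evidence arrives, the old posterior
# becomes the new prior and we loop.
belief = prior
for heads, tails in [(7, 3), (6, 4), (8, 2)]:
    belief = update(belief, heads, tails)

print(thetas[np.argmax(belief)])  # posterior mode settles near 0.7
# Note: nothing in this loop ever asks whether the model itself
# (or the support of the prior) was a sensible choice.
```

The final comment is the point of the definition above: the loop only ever reallocates probability mass within the model it was handed, so "reject the model" is not an operation it can perform.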

Comment author: JoshuaZ 05 July 2010 03:10:26PM *  4 points [-]

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn't a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly.

But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who was a productive chemist before he became a YEC proponent. He's also a highly ranked chess master. He's clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of supporters who aren't smart. (There's some evidence to back this up: see for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust across education levels by some measures, so the first isn't just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.

It does, however, seem that on LW there's a common tendency to label beliefs "silly" when what's really meant is "I assign a very low probability to this belief being correct" or "I don't understand how someone's mind could be so warped as to have this belief." Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others as especially bad are more likely to yield a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because paperclipping is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to people's own moral systems that they conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this occurs subtly enough for them not to notice.

Comment author: cupholder 05 July 2010 08:21:00PM 2 points [-]

Do people here really think that antinatalism is silly?

A data point: I don't think antinatalism (as defined by Roko above - 'it is a bad thing to create people') is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child's life would be equally bad, it'd be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

Comment author: Roko 05 July 2010 10:24:23AM *  6 points [-]

Robert Ettinger's surprise at the incompetence of the establishment:

Robert Ettinger waited expectantly for prominent scientists or physicians to come to the same conclusion he had, and to take a position of public advocacy. By 1960, Ettinger finally made the scientific case for the idea, which had always been in the back of his mind. Ettinger was 42 years old and said he was increasingly aware of his own mortality.[7] In what has been characterized as an historically important mid-life crisis,[7] Ettinger summarized the idea of cryonics in a few pages, with the emphasis on life insurance, and sent this to approximately 200 people whom he selected from Who's Who in America.[7] The response was very small, and it was clear that a much longer exposition was needed, mostly to counter cultural bias. Ettinger correctly saw that people, even the intellectually, financially and socially distinguished, would have to be educated into understanding his belief that dying is usually gradual and could be a reversible process, and that freezing damage is so limited (even though fatal by present criteria) that its reversibility demands relatively little in future progress.

Ettinger soon made an even more troubling discovery, principally that "a great many people have to be coaxed into admitting that life is better than death, healthy is better than sick, smart is better than stupid, and immortality might be worth the trouble!"

Maybe if I publish a clear scientifically minded book they'll listen?

Following publication of The Prospect of Immortality (1962) Robert Ettinger again waited for prominent scientists, industrialists, or others in authority to see the wisdom of his idea and begin implementing it.

He is still waiting!

I write this because a prominent claim of the SIAI founders (Vassar especially) is that we vastly overestimate the competence of both society in general, and of the elites who run it.

Another example along the same lines is the relative non-response to the publication of Nanosystems, especially the National Nanotechnology Initiative fiasco.

Comment author: cupholder 05 July 2010 11:32:40AM 2 points [-]

A good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

While Ettinger was the first, most articulate, and most scientifically credible person to argue the idea of cryonics,[citation needed] he was not the only one. In 1962, Evan Cooper had authored a manuscript entitled Immortality, Scientifically, Physically, Now under the pseudonym "N. Durhing".[8] Cooper's book contained the same argument as did Ettinger's, but it lacked both scientific and technical rigor and was not of publication quality.[citation needed]

Comment author: steven0461 04 July 2010 09:48:36PM 1 point [-]

I thought I did a search but apparently not; sorry.

Comment author: cupholder 04 July 2010 09:56:30PM *  1 point [-]

In the long run, it's all good - I think it's a decent paper, and I suppose this way more eyeballs see it than if I were the only one to post it. (Not to say that we should make a regular habit of linking things four times :-)

Comment author: NancyLebovitz 04 July 2010 02:47:23PM 2 points [-]

My impression was that the idea that schizophrenia runs in families was dismissed as an old wives' tale, but a quick Google search isn't turning up anything along those lines, though it does seem that some Freudians believed schizophrenia was a mental rather than physical disorder.

Comment author: cupholder 04 July 2010 09:43:09PM *  3 points [-]

My understanding is that historically, schizophrenia has been presumed to have a partly genetic cause since around 1910, out of which grew an intermittent research program of family and twin studies to probe schizophrenia genetics. An opposing camp that emphasized environmental effects emerged in the wake of the Nazi eugenics program and the realization that complex psychological traits needn't follow trivial Mendelian patterns of inheritance. Both research traditions continue to the present day.

Edit to add - Franz Josef Kallman, whose bibliography in schizophrenia genetics I somewhat glibly linked to in the grandparent comment, is one of the scientists who was most firmly in the genetic camp. His work (so far as I know) dominated the study of schizophrenia's causes between the World Wars, and for some time afterwards.

Comment author: NancyLebovitz 04 July 2010 12:57:01PM 1 point [-]

That there's a hereditary component to schizophrenia.

Comment author: cupholder 04 July 2010 02:34:07PM *  1 point [-]
Comment author: RichardKennaway 04 July 2010 08:55:44AM 8 points [-]

It's a dreadful graphic. No information leaps out at the viewer; you have to hunt through two tables for the meanings of the letters and numbers. It takes an effort to find the letter for any given block, or the block for any given letter, in radii far from where the letters appear. It's difficult to tell apart yellow and gold, or grey and silver: the key only serves to highlight how indistinguishable the colours are.

And since this graphic does not work, I cannot see it as beautiful. It is an ugly sacrifice of function to superficial prettiness.

Comment author: cupholder 04 July 2010 11:22:18AM *  1 point [-]

Agreed. I wish they'd stick to calling hard-to-read graphics like this 'visualizations' - the word 'infographics' implies a graphic designed to efficiently display information.

The worst part is it wouldn't be hard to improve the graphic. They could drop the annoying 84-item list and just directly write the emotions in the 84 slots around the circle instead of using numbers. Enlarge the circle and blow up the font size a bit - then they can put the A to J list of cultures into the empty middle of the circle so you don't have to keep looking off the side to cross-reference it. That'd help, even if it wouldn't fix it.

Edit - I see that when they used that infographic as their book's cover, they gave up on the idea of making it a real infographic and just made it into a pretty flower!

Comment author: JohannesDahlstrom 03 July 2010 09:11:56AM *  7 points [-]

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-pants boffin types just may not seem a very good idea.

Comment author: cupholder 04 July 2010 08:11:18AM 2 points [-]

See also 'crank magnetism.'

I wonder if this counts as evidence for my heuristic of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

Comment author: Unnamed 04 July 2010 05:53:22AM 2 points [-]

You're third, after steven0461 and nhamann.

Comment author: cupholder 04 July 2010 06:14:57AM 3 points [-]
Comment author: Douglas_Knight 03 July 2010 07:36:16PM 1 point [-]

Half of those hits are in the social sciences. I suspect that is economists defining the rational agents they study as Bayesian, but that is rather different from the economists being Bayesian themselves! The other half, in math & statistics, probably reflects Bayesian statisticians becoming more common, which you might count as science (and 10% are in science proper).

Anyhow, it's clear from the context (I'd have thought from the quote) that he just means that the vast majority of scientists are not interested in defining science precisely.

Comment author: cupholder 04 July 2010 05:47:43AM 0 points [-]

It might well have been clear from the quote itself, but not to me - I just read the quote as saying Bayesian thinking and Bayesian methods haven't become more popular in science, which doesn't mesh with my intuition/experience.
