komponisto comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: komponisto 19 November 2010 01:30:48PM *  1 point [-]

I don't disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person's belief in them raises serious doubts about that person's understanding of science at the very least, if not their general rationality level.

Comment author: Risto_Saarelma 19 November 2010 02:42:16PM *  1 point [-]

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There's a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren't subject to the inverse square law, so this isn't a new idea.

Damien Broderick's attitude in his book is basically that there's a bunch of anomalous observations and neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel's attitude is to come up with a highly speculative physical theory that could explain that kind of phenomenon, and which would take a bit more than "would need extra particles" to show as nonsense.

"Not understanding basic physics" doesn't really seem to cut it in either case. "It's been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn't anything in it, and if you still want to have a go, you'd better start with something the smart people in the 1970s didn't have" is basically the position I've got.

I'm not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I'm still waiting for someone more physics-literate to have a go at Goertzel's pilot wave paper.

Comment author: komponisto 19 November 2010 03:24:26PM *  0 points [-]

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use...

"Not understanding basic physics" doesn't really seem to cut it in either case

"Not understanding basic physics" sounds like a harsh quasi-social criticism, like "failing at high-school material". But that's not exactly what's meant here. Rather, what's meant is more like "not being aware of how strong the evidence against psi from 20th-century physics research is".

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
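The point can be illustrated with a toy calculation. In this sketch (all numbers invented for illustration), the model space has just two members: M, under which psi is very improbable, and an alternative A, under which it isn't. An observation that favors M drags down the total probability of psi, even though the observation says nothing about psi directly.

```python
# Toy two-model space. "M" plays the role of established physics,
# which assigns psi (H) a low probability; "A" is some alternative.
# All numbers below are made up purely for illustration.
p_M, p_A = 0.5, 0.5                  # priors over the two models
p_H_given = {"M": 0.01, "A": 0.5}    # P(H | model)
p_e_given = {"M": 0.9, "A": 0.1}     # P(observation e | model); e favors M

# Total prior probability of H, marginalizing over models:
prior_H = p_M * p_H_given["M"] + p_A * p_H_given["A"]

# Update the model probabilities on e (assuming e and H are
# conditionally independent given the model):
post_M = p_M * p_e_given["M"] / (p_M * p_e_given["M"] + p_A * p_e_given["A"])
post_A = 1 - post_M

# Total posterior probability of H:
post_H = post_M * p_H_given["M"] + post_A * p_H_given["A"]

print(prior_H, post_H)  # post_H < prior_H: evidence for M lowered P(H)
```

So confirming the low-psi model indirectly disconfirms psi, which is the sense in which precision tests of quantum field theory double as evidence against psi.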

Comment author: Jack 19 November 2010 04:03:15PM 0 points [-]

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.

Evidence distinguishes between models; it isn't evidence for individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Comment author: komponisto 19 November 2010 11:36:29PM 0 points [-]

Evidence distinguishes between models; it isn't evidence for individual models.

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Carroll claims that current data implies the probability of such models being correct is near zero. So I'd like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll's (and others') mistake?

Comment author: Jack 22 November 2010 04:59:50PM *  1 point [-]

including a "model", which is just a name for a complex conjunction of hypotheses

If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.

That is all I meant.

Comment author: wnoise 20 November 2010 01:27:48AM 0 points [-]

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

Yes, but this depends on what other hypotheses are considered in the "false" case.

Comment author: komponisto 20 November 2010 02:09:17AM *  0 points [-]

The "false" case is the disjunction of all other possible hypotheses besides the one you're considering.

Comment author: wnoise 20 November 2010 02:58:35AM *  0 points [-]

That's not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.

Comment author: komponisto 20 November 2010 03:53:37AM *  0 points [-]

One typically works with some limited ensemble of possible hypotheses

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable", "one typically works..."), rather than fundamental laws governing belief. But the latter is what we're interested in in this context: we want to know what's true and how to think, not what we can publish and how to write it up.

Comment author: wnoise 20 November 2010 08:23:19AM *  0 points [-]

Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.

As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.

(This is obviously true when we look at P(H_i|e). It's a bit less so when we look at P(e|H) vs P(e|~H). This seems objective, and it is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an "or" of all the other theories, then P(e|~H) is dependent on the prior probabilities for each of the H_i that are the subcomponents of ~H. It's also utterly useless by itself for judging H; we want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)

It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as "log odds", for instance, is only useful for comparing two specific hypotheses, not "this hypothesis" and "everything else".
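The prior-dependence can be made concrete with a small sketch (invented numbers): when ~H is a disjunction of two sub-hypotheses H1 and H2, P(e|~H) is a prior-weighted mixture, so the very same observation can count for or against H depending on how the prior mass inside ~H is split.

```python
def likelihood_ratio(p_e_H, p_H1, p_H2, p_e_H1, p_e_H2):
    """P(e|H) / P(e|~H), where ~H = (H1 or H2) with disjoint sub-hypotheses.

    P(e|~H) is the prior-weighted average of the sub-hypotheses'
    likelihoods -- which is exactly where the prior-dependence enters.
    """
    p_e_notH = (p_H1 * p_e_H1 + p_H2 * p_e_H2) / (p_H1 + p_H2)
    return p_e_H / p_e_notH

p_e_H = 0.5  # likelihood of the observation e under H (made up)

# Same sub-hypotheses (P(e|H1)=0.9, P(e|H2)=0.1), different priors inside ~H:
mostly_H1 = likelihood_ratio(p_e_H, 0.9, 0.1, 0.9, 0.1)  # ~H dominated by H1
mostly_H2 = likelihood_ratio(p_e_H, 0.1, 0.9, 0.9, 0.1)  # ~H dominated by H2

print(mostly_H1 < 1, mostly_H2 > 1)  # e counts against H, then for H
```

A likelihood ratio below 1 makes e evidence against H, above 1 evidence for it; nothing about e or H changed between the two calls, only the priors over the subcomponents of ~H.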

But I still have objections to most of what you say.

You've given an essentially operational definition of "evidence for" in terms of operations that can't be done.

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

Yes. The standard way to express that is that you can't actually work with P(Hypothesis), only P(Hypothesis | Model Space).

You can then, of course, expand your model space if you find it is inadequate.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable",

"Computable" is hardly ad-hoc. It's a fundamental restriction on how it is possible to reason.

we want to know what's true and how to think,

If you want to know how to think, you had better pick a method that's actually possible.

This really is just another facet of "all Bayesian probabilities are conditional."

Comment author: Jack 19 November 2010 02:02:42PM 0 points [-]

This isn't someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues, or fraud. But that isn't obviously the case, and waving our hands and throwing out these words isn't an explanation of the results. I'm going to try to make a post on this subject a priority now.

Comment author: komponisto 19 November 2010 02:19:46PM *  3 points [-]

This isn't someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs.

Did you read the linked post by Sean Carroll? Parapsychologists aren't condemned for holding a similar position to the uneducated; they're condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century's worth of experimentally confirmed physical knowledge is far from hand-waving.

Humans are still confused enough about the world that there is room for change in our current understanding of physics

Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.

Now, I'm not a physicist, so if I'm actually wrong about any of this, I'm willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.

Comment author: Jack 19 November 2010 03:48:57PM 2 points [-]

Physicists are not confused in the relevant regimes here.

We don't know what the relevant regimes are here. Obviously human brains aren't producing force fields that are bending spoons.

We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven't seen yet and we don't have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).

On the other hand, maybe our physics isn't complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we're in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn't shown that all but error/fraud/bias have been ruled out.

Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous on his part (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment, and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.

I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren't familiar with the results of the field. I recommend Outside the Gates of Science if you haven't read it yet.

Comment author: shokwave 19 November 2010 04:20:01PM 3 points [-]

It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that - through pure, random chance - happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event - but he's spent eight years trying to make it happen, and so happen it eventually has. Good for him!

The only problem with all of this is that the journals that we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that's going on here.
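The failure mode being described, keep collecting (or keep checking) until chance cooperates, is easy to simulate. A rough sketch: a perfectly fair coin, a standard z-test, and an experimenter who peeks after every batch and stops the moment p < 0.05. The nominal 5% false-positive rate gets badly inflated.

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def peeking_experiment(n_peeks=20, batch=100):
    """Flip a fair coin in batches; 'publish' as soon as p < 0.05."""
    heads = flips = 0
    for _ in range(n_peeks):
        heads += sum(random.random() < 0.5 for _ in range(batch))
        flips += batch
        # z-statistic against the null of a fair coin:
        z = (heads - flips / 2) / math.sqrt(flips / 4)
        if abs(z) > 1.96:        # two-sided p < 0.05
            return True          # "significant" result from a fair coin
    return False                 # never reached significance; file drawer

runs = 1000
false_positive_rate = sum(peeking_experiment() for _ in range(runs)) / runs
print(false_positive_rate)  # well above the nominal 0.05
```

With 20 looks at the data, the chance of a fair coin producing at least one "significant" moment is roughly doubled or worse, which is why stopping rules have to be fixed in advance.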

Comment author: Jack 19 November 2010 04:36:41PM *  0 points [-]

Right, like I said, publication bias is a possibility. But in Honorton's precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem's lone study, that troubles me.
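The arithmetic behind a file-drawer ratio of that sort runs along Rosenthal's "fail-safe N" lines: if k published studies combine, Stouffer-style, to z = (sum of z_i) / sqrt(k), you can solve for how many unpublished null studies would dilute the combined z below significance. The inputs below are hypothetical, not Honorton's actual data, and are just to show the shape of the calculation.

```python
import math

def fail_safe_n(k, mean_z, z_crit=1.645):
    """Number of unpublished null (z = 0) studies needed to drag a
    Stouffer combined z below z_crit.

    Stouffer: z_comb = sum_z / sqrt(k + N); set z_comb = z_crit and
    solve for N. Inputs here are hypothetical, not Honorton's data.
    """
    sum_z = k * mean_z
    return (sum_z / z_crit) ** 2 - k

# e.g. 30 published studies averaging z = 1.2 (made-up numbers):
k = 30
n_drawer = fail_safe_n(k, 1.2)
print(n_drawer, n_drawer / k)  # null studies needed, and the ratio to published
```

Even modest average effects across a few dozen studies can demand a file drawer many times larger than the published literature, which is the kind of figure the 46:1 claim is reporting.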

Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.

What evidence is there for this?

Comment author: shokwave 20 November 2010 06:27:52AM *  0 points [-]

Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.

What evidence is there for this?

From here,

The paper ... is the culmination of eight years' work by Daryl Bem of Cornell University in Ithaca, New York.

Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image's eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
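For a sense of scale: a 53.1% hit rate against a 50% chance baseline is a tiny effect whose statistical significance hinges almost entirely on sample size. A quick sketch using a normal approximation to the binomial (the trial counts here are hypothetical, not Bem's actual per-experiment numbers):

```python
import math

def two_sided_p(hit_rate, n, p0=0.5):
    """Two-sided p-value (normal approximation) for n Bernoulli trials."""
    z = (hit_rate - p0) / math.sqrt(p0 * (1 - p0) / n)
    return math.erfc(abs(z) / math.sqrt(2))

# The same 53.1% hit rate at three hypothetical sample sizes:
p_small, p_mid, p_large = (two_sided_p(0.531, n) for n in (100, 1000, 10000))
print(p_small, p_mid, p_large)  # non-significant, borderline, overwhelming
```

At a hundred trials the result is noise; at around a thousand it sits right at the conventional threshold; at ten thousand it would be overwhelming. Which is exactly why long-running accumulation plus a significance cutoff is the thing to scrutinize.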

Comment author: Jack 22 November 2010 04:13:01PM 0 points [-]

Why do we think this means early test groups weren't included in the study? It just sounds like it took eight years to get the large sample size he wanted.

Comment author: shokwave 23 November 2010 12:42:35AM 0 points [-]

I think that it means that early test groups weren't included because that is the easiest way to produce the results we're seeing.

It just sounds like it took eight years to get the large sample size he wanted.

Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying "I took 4 years to make sure the sample size was large enough."

Comment author: Jack 23 November 2010 01:15:57AM 0 points [-]

Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can't just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn't that hard, it seems silly just to assume.