Risto_Saarelma comments on Existential Risk and Public Relations - Less Wrong

36 points · Post author: multifoliaterose · 15 August 2010 07:16AM




Comment author: komponisto 19 November 2010 01:30:48PM *  1 point

I don't disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person's belief in them raises serious doubts about that person's understanding of science at the very least, if not their general rationality level.

Comment author: Risto_Saarelma 19 November 2010 02:42:16PM *  1 point

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There's a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren't subject to the inverse square law, so this isn't a new idea.

Damien Broderick's attitude in his book is basically that there's a bunch of anomalous observations and that neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel's attitude is to come up with a highly speculative physical theory that could explain that kind of phenomenon, and which would take rather more than "would need extra particles" to show to be nonsense.

"Not understanding basic physics" doesn't really seem to cut it in either case. "It's been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn't anything in it, and if you still want to have a go, you'd better start with something the smart people in the 1970s didn't have" is basically the attitude I've got.

I'm not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I'm still waiting for someone more physics-literate to have a go at Goertzel's pilot wave paper.

Comment author: komponisto 19 November 2010 03:24:26PM *  0 points

I got the impression from Damien Broderick's book that a lot of PSI researchers do understand physics and aren't postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use...

"Not understanding basic physics" doesn't really seem to cut it in either case

"Not understanding basic physics" sounds like a harsh quasi-social criticism, like "failing at high-school material". But that's not exactly what's meant here. Rather, what's meant is more like "not being aware of how strong the evidence against psi from 20th-century physics research is".

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
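The arithmetic behind this point can be checked with a toy calculation (the numbers below are purely illustrative, not drawn from the thread):

```python
# Toy Bayesian update: a model M (e.g. quantum field theory) that assigns the
# hypothesis H ("psi is real") a low probability, versus a permissive rival N.

p_M, p_N = 0.5, 0.5        # priors on the two models
p_H_given_M = 0.001        # M says H is very unlikely
p_H_given_N = 0.2          # N allows H

# Prior on H, marginalizing over the models:
p_H_prior = p_M * p_H_given_M + p_N * p_H_given_N

# Observe evidence e that favors M (a high-precision experiment):
p_e_given_M, p_e_given_N = 0.9, 0.1
p_e = p_M * p_e_given_M + p_N * p_e_given_N

# Posterior on the models after seeing e:
p_M_post = p_M * p_e_given_M / p_e
p_N_post = p_N * p_e_given_N / p_e

# Posterior on H (assuming e bears on H only through the models):
p_H_post = p_M_post * p_H_given_M + p_N_post * p_H_given_N

print(p_H_prior, p_H_post)  # P(H) falls as M gains support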

Comment author: Jack 19 November 2010 04:03:15PM 0 points

The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.

Evidence distinguishes between models; it is not for individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Comment author: komponisto 19 November 2010 11:36:29PM 0 points

Evidence distinguishes between models; it is not for individual models.

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.

Carroll claims that current data implies the probability of such models being correct is near zero. So I'd like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll's (and others') mistake?

Comment author: Jack 22 November 2010 04:59:50PM *  1 point

including a "model", which is just a name for a complex conjunction of hypotheses

If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.

That is all I meant.
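Jack's point can be made concrete with a hypothetical sketch (the models and numbers are invented for illustration): if two models share the part of the conjunction that the evidence actually bears on, the evidence confirms both equally and cannot separate them.

```python
# Two conjunctive models that differ only in one conjunct:
#   M1 = A and B,   M2 = A and B'
# If the evidence e bears only on the shared part A, both models predict e
# identically, so e fails to distinguish them.

p_e_given_A = 0.9       # e is likely when A holds
p_e_given_not_A = 0.1   # and unlikely otherwise

# Both models assert A, so their predictions for e coincide:
p_e_given_M1 = p_e_given_A
p_e_given_M2 = p_e_given_A

bayes_factor = p_e_given_M1 / p_e_given_M2
print(bayes_factor)  # 1.0: e confirms both models but separates neither
```

So "evidence for the model" may really be evidence for only some of its conjuncts, leaving nearby variant models untouched.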

Comment author: wnoise 20 November 2010 01:27:48AM 0 points

By the Bayesian definition of evidence, "evidence for" a hypothesis (including a "model", which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.

Yes, but this depends on what other hypotheses are considered in the "false" case.

Comment author: komponisto 20 November 2010 02:09:17AM *  0 points

The "false" case is the disjunction of all other possible hypotheses besides the one you're considering.

Comment author: wnoise 20 November 2010 02:58:35AM *  0 points

That's not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.

Comment author: komponisto 20 November 2010 03:53:37AM *  0 points

One typically works with some limited ensemble of possible hypotheses

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable", "one typically works..."), rather than fundamental laws governing belief. But the latter is what we're interested in in this context: we want to know what's true and how to think, not what we can publish and how to write it up.

Comment author: wnoise 20 November 2010 08:23:19AM *  0 points

Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.

As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.

(This is obviously true when we look at P(H_i|e). It's a bit less so when we look at P(e|H) vs. P(e|~H). The latter seems objective, and it is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is a disjunction of "all the other theories", then P(e|~H) depends on the prior probabilities of each of the H_i that are the subcomponents of ~H. It's also utterly useless by itself for judging H; we want P(H|e) for that. P(e|H) is of course why we want P(H), so that we can make useful predictions.)
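This prior-dependence can be demonstrated with a toy calculation (the likelihoods and weights are illustrative assumptions): with the alternatives inside ~H fixed, merely shifting the prior weight between them flips whether the same observation counts for or against H.

```python
# P(e|~H) is a prior-weighted mixture over the alternatives H_1, H_2 inside ~H,
# so the direction of the update on H depends on those prior weights.

p_e_given_H = 0.5    # likelihood of the evidence under H
p_e_given_H1 = 0.9   # alternative H_1 predicts e strongly
p_e_given_H2 = 0.1   # alternative H_2 predicts e weakly

def p_e_given_not_H(w1, w2):
    """Mixture likelihood under ~H for prior weights w1, w2 (w1 + w2 = 1)."""
    return w1 * p_e_given_H1 + w2 * p_e_given_H2

# If H_1 dominates the prior over alternatives, e favors ~H over H:
print(p_e_given_not_H(0.9, 0.1))  # 0.82 > 0.5, so e is evidence against H

# If H_2 dominates, the very same observation favors H:
print(p_e_given_not_H(0.1, 0.9))  # 0.18 < 0.5, so e is evidence for H
```

Nothing about the world changed between the two calls, only the prior over the alternatives, yet the sign of the update on H reversed.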

It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as "log odds", for instance, is only useful for comparing two specific hypotheses, not "this hypothesis" against "everything else".

But I still have objections to most of what you say.

You've given an essentially operational definition of "evidence for" in terms of operations that can't be done.

Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.

Yes. The standard way to express that is that you can't actually work with P(Hypothesis), only P(Hypothesis | Model Space).

You can then, of course, expand your model space if you find that it is inadequate.

You appear to be thinking in terms of ad-hoc statistical techniques ("computable", "one typically works...")

"Computable" is hardly ad-hoc. It's a fundamental restriction on how it is possible to reason.

we want to know what's true and how to think,

If you want to know how to think, you had better pick a method that's actually possible.

This really is just another facet of "all Bayesian probabilities are conditional."