orthonormal comments on It's all in your head-land - Less Wrong

Post author: colinmarshall 22 July 2009 07:41PM

Comment author: orthonormal 22 July 2009 10:23:24PM *  11 points [-]

First off, welcome to Less Wrong! Check out the welcome thread if you haven't already.

You have a good writing style, but I hope you'll pardon me if I make a few suggestions based on the usual audience for Less Wrong posts:

Typically, a post of this length should be broken up into a sequence; you run the risk of "too long; didn't read" reactions after 1000 words, let alone 3000, and the conversation in the comments is usually sharper if the post has a single narrow focus. Usually, the analysis of a situation and the recommendations become separate posts if both are substantial.

Secondly, with the notable exception (sometimes) of P.J. Eby, we're often mistrustful of theories borne of introspection and anecdotes, and especially of recommendations based on such theories. There's therefore a norm of looking for and linking to experimental confirmation where it exists, and being doubly cautious if it doesn't. In this case, for instance, you could find some experimental evidence on choking that supports your thesis. This also forces you to think carefully about what sort of things your model predicts and doesn't predict, since at first glance it seems vague to the point of danger. The more specific you can get about these phenomena, the more useful your post will be.

Comment author: Yvain 23 July 2009 12:13:09PM *  12 points [-]

Although I agree that a theory born of empirical evidence is better than one born of introspection, I think it is kind of dangerous to introspect, develop a theory, and then when you're posting it on Less Wrong look for some evidence to support it so that you can say it's empirical. It risks reducing The Procurement of Evidence to a ritual.

See, the problem is, he could probably tie the evidence about choking into his theory. But if he had the opposite theory, he could probably tie in studies like the ones showing mental practice can improve sports performance, and the one showing that problem-solving areas of the brain are highly active when we daydream, to support that. That means that the fact that he can find a tangentially related study doesn't really make it more likely that the post is true. It'd just make us feel all nice and empirical.

The matter would be different if there happened to be a study about this exact topic, or if there had been some study that had inspired him to come up with this theory. But "come up with theory, find supporting evidence" seems dangerous to me.

Comment author: Vladimir_Nesov 23 July 2009 12:17:24PM 3 points [-]

Isn't the answer simply that one shouldn't misinterpret what it means for evidence to be supporting?

Comment author: orthonormal 23 July 2009 06:45:07PM *  0 points [-]

Oh, good point. I think of "come up with theory, think about what it implies, look for evidence one way or the other" as the ideal, but the difficulty is that confirming information is more salient in my memory than disconfirming.

On the other hand, filtered evidence is still evidence, and a lack of outside evidence can be a sign that there's no good confirming evidence. (Or, in this case, just a sign that the poster is new around here.)

Comment author: colinmarshall 22 July 2009 10:35:05PM *  4 points [-]

Thanks; duly noted. I plan to write a few posts on the "road testing" of Less Wrong and Less Wrong-y theories about rationality and the defeat of akrasia, so these are helpful pointers.

Comment author: pjeby 22 July 2009 10:51:44PM 8 points [-]

Secondly, with the notable exception (sometimes) of P.J. Eby, we're often mistrustful of theories borne of introspection and anecdotes, and especially of recommendations based on such theories.

I think you underestimate just how mistrustful of introspection and armchair theorizing I am. For example, I'm certainly mistrustful of the armchair theorizing you're doing right now. ;-)

In the specific area of akrasia and practical arts of motivation, I am especially mistrustful of the theorizing that accompanies most psychology experiments I read about -- even when I bypass the popularized version and go to the original paper.

The typical paper I end up seeing combines a wild speculation with a spectacularly underwhelming actual result, due in large part to stupid methodological mistakes, like statistically analyzing people as groups rather than ever finding out what individuals are doing, or failing to control for what those individuals do or don't understand about a procedure, or whether they're even following the procedure they're supposed to.

If you do the exact same thing with 100 people, you will be rather lucky to not get 100 different results. So to get more than an interesting anecdote out of an experiment, you'd better be able to vary what you're doing on a more individual basis.
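A toy simulation (purely illustrative, with made-up numbers, not drawn from any study mentioned in the thread) shows how a group-level statistic can erase what every individual is doing. Suppose the same intervention strongly helps half the subjects and strongly hurts the other half:

```python
import random

random.seed(0)

# Hypothetical scenario: 100 subjects, where the intervention shifts
# half of them by +10 and the other half by -10.
effects = [10 if i % 2 == 0 else -10 for i in range(100)]
# Add a little individual noise so no two results are identical.
observed = [e + random.gauss(0, 1) for e in effects]

# The group mean comes out near zero ("no effect")...
group_mean = sum(observed) / len(observed)
# ...even though every individual shows an effect of magnitude near 10.
mean_magnitude = sum(abs(x) for x in observed) / len(observed)

print(f"group mean effect:        {group_mean:.2f}")
print(f"mean |individual effect|: {mean_magnitude:.2f}")
```

The group analysis reports roughly nothing happened, while an individual-level analysis would show a large effect on every single subject, which is pjeby's point about needing to vary what you do on an individual basis.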

Which is also why it's usually ill-advised to try to take psych-paper speculations and turn them into practical advice, versus copying what somebody reasonably similar to you has anecdotally done and received results from. The latter is FAR more likely to directly translate to something useful.

Comment author: MichaelVassar 23 July 2009 07:03:10PM 2 points [-]

It seems to me that this is an excellent example of considering the construction of a predictive model of the internal workings of a black box to be "real science" while the dominant paradigm has become one of treating statistical models of the black boxes as performing random number generator controlled transformations of inputs into outputs to be "real science".

Comment author: pjeby 23 July 2009 07:16:29PM 1 point [-]

To be fair, not all psychological statistics are bunk. It's just that it's incredibly slow the way it's done, and all you can get from any one experiment is a vague idea like, "thinking concretely about a task makes it more likely you'll do it." Direct marketers knew that ages ago.

Comment author: thomblake 23 July 2009 07:18:14PM *  1 point [-]

Direct marketers knew that ages ago.

For certain values of "knew".

Science has different epistemic standards.

ETA: though you're correct to point out that the papers mentioned above don't seem to follow them very well.

Comment author: Douglas_Knight 24 July 2009 05:52:03AM 3 points [-]

For certain values of "knew".
Science has different epistemic standards.

The marketers knew it well enough that the scientists should have studied it. That they didn't was a serious epistemic failing; it's not clear that these different standards are better. Denying something on the grounds that you haven't studied it enough and refusing to study it is almost a fully general counterargument.

Comment author: pjeby 23 July 2009 07:26:37PM 2 points [-]

Of course. Unfortunately for people needing personal and practical applications, science isn't caught up and may never be, precisely because they're not looking for the same kinds of things. (They're looking for "true" rather than "useful".)

Comment author: thomblake 23 July 2009 07:06:45PM 0 points [-]

I couldn't parse this. Could you maybe explain it in multiple (shorter) sentences?

Comment author: pjeby 23 July 2009 08:18:48PM 2 points [-]

He's saying that the dominant paradigm in the "soft" sciences is that you treat your subjects as black boxes performing semi-random transformation of inputs into outputs... without ever really trying to understand (in a completely reductionist way) what's going on inside the box.

The "hard" sciences don't work that way, of course: you don't need to test a thousand different pieces of iron and copper, just to get a statistical idea of which one maybe has a bigger heat capacity, for example.

To continue the analogy, it's as if the soft sciences have no calorimeters, thermometers, and scales with which to actually measure the relevant thing, and so instead are measuring something else that only weakly correlates with the thing we want to measure.

PCT, btw, proposes that behavior -- in the sense of actions taken by organisms -- is the "weakly correlated" thing, and that perceptual variables are the thing we actually want to measure. And that, with appropriate experimental design, we can isolate and measure those variables on a per-subject basis, eliminating the need to test huge groups just to get a vague idea of what's going on.
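The control-loop idea behind this claim can be sketched in a toy simulation (hypothetical parameters, not Marken's actual apparatus): a simulated subject varies its output to hold a perceived variable at a reference value while a disturbance pushes it around. The subject's output ends up almost perfectly negatively correlated with the disturbance, even though the perception itself barely moves, which is why per-subject measurement can be so clean:

```python
import math
import random

random.seed(1)

# Toy compensatory-tracking loop in the spirit of PCT (parameters are
# invented for illustration): the "subject" acts to keep a perceived
# cursor position at a reference of zero against a drifting disturbance.
n, gain, dt = 5000, 50.0, 0.01
disturbance, output, perception = [], [], []
d, o = 0.0, 0.0
for t in range(n):
    d += random.gauss(0, 0.1)      # slowly drifting disturbance
    p = o + d                      # perception = own output + disturbance
    o += gain * (0.0 - p) * dt     # integrate error against reference 0
    disturbance.append(d)
    output.append(o)
    perception.append(p)

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

# Output mirrors the disturbance (correlation near -1) while the
# perception stays pinned near the reference.
print(f"corr(output, disturbance) = {corr(output, disturbance):.3f}")
```

In a real PCT-style experiment the disturbance is known to the experimenter but invisible to the subject, so a near-perfect negative correlation like this is what identifies which perceptual variable the subject is actually controlling.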

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

Comment author: orthonormal 24 July 2009 12:32:32AM *  5 points [-]

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

This doesn't make sense to me; sharper predictive success becomes unfavorable for publication? If this was written publicly, can you provide the source?

Comment author: pjeby 24 July 2009 01:11:50AM *  9 points [-]

From Teaching Dogma in Psychology, a lecture by Dr. Richard Marken, Associate Professor of Psychology at Augsburg College:

Psychologists see no real problem with the current dogma. They are used to getting messy results that can be dealt with only by statistics. In fact, I have now detected a positive suspicion of quality results amongst psychologists. In my experiments I get relationships between variables that are predictable to within 1 percent accuracy. The response to this level of perfection has been that the results must be trivial! It was even suggested to me that I use procedures that would reduce the quality of the results, the implication being that noisier data would mean more.

The lecture was Dr. Marken's farewell speech. After five years of unsuccessfully trying to interest his peers in the improved methods made possible by PCT (most lost interest when they understood enough to realize that it was a major paradigm shift), he chose to resign his professorship, rather than continue to teach what he had come to believe (as a result of his PCT studies) was an unscientific research paradigm. As he put it:

It would be like having to teach a whole course on creationism and then having a “by the way, this is the evolutionary perspective” section. Why waste time on non-science? From my point of view, most of what is done in the social sciences is scientific posturing and verbalizing.

It's an interesting read, whether you agree with his conclusions or not. Not a lot of people have the intellectual humility, regarding their own field, to accept the ideas of an outsider, question everything they've learned, and then resign when they realize they can't, in good conscience, teach the established dogma:

So my problem is what I, as a teacher, should do. I consider myself a highly qualified psychology professor. I want to teach psychology. But I don’t want to teach the dogma, which, as I have argued, is a waste of time. So, do I leave teaching and wait for the revolution to happen? I’m sure that won’t be for several decades. Thus I have a dilemma—the best thing for me to do is to teach, but I can’t, because what I teach doesn’t fit the dogma. Any suggestions?

Edit to add: it appears that, 20 years later, Dr. Marken is now considering a return to teaching, as he reports on his 25 years of PCT research.

Comment author: orthonormal 24 July 2009 07:24:14PM *  2 points [-]

Thanks for the citation. I know it's a bother to do so, but I'd appreciate it if you linked your sources more often when they're publicly available but unfamiliar to the rest of us.

Comment author: Vladimir_Nesov 24 July 2009 09:51:54AM *  0 points [-]

And the results were never published in any form? The revolutionary results were rejected by all publication venues in the field? This story is a lie, an excuse of this whining-based field:

It's the government's fault, that taxes you and suppresses the economy - if it weren't for that, you would be a great entrepreneur. It's the fault of those less competent who envy your excellence and slander you - if not for that, the whole world would pilgrimage to admire you. It's racism, or sexism, that keeps you down - if it weren't for that, you would have gotten so much further in life with the same effort.

Comment author: pjeby 26 July 2009 02:20:28AM 6 points [-]

And the results were never published in any form?

In the second link I gave, Marken self-cites 6 of his papers that were published in various journals over the years. See page 8 of the PDF. I don't know if there are any more publications than that, since Marken said he was only giving a 45-minute summary of his 25 years' work. (Oh, and before he learned PCT, he wrote a textbook on experimental design in psychology.)

However, I suspect that since you missed those bits, there's a very good chance you didn't read either of the links -- and that would be a mistake, if your goal is to understand, rather than to simply identify a weak spot to jump on. You have rounded a very long distance to an inaccurate cliche.

See, Marken never actually complained that he couldn't get published, he complained that he could not abide teaching pre-PCT psychology, as he considered it equivalent to pre-Darwin biology or pre-Galileo physics, and it would therefore be silly to spend most of a semester teaching the wrong thing in order to turn around at the end and explain why everything they just learned was wrong. That was the issue that led him to leave his professorship, not publication issues.

Comment author: Cyan 26 July 2009 02:53:00AM 3 points [-]

I was about to write that Marken should have considered getting an affiliation to an engineering department. Engineers love them some closed-loop systems, and there would probably have been scope for research into the design of human-machine interactions. Then I read his bio, and learned that that was pretty much what he did, only as a consultant, not an academic.

Comment author: Vladimir_Nesov 26 July 2009 07:37:25AM *  -1 points [-]

The words you used in the original comment don't lend themselves to the new interpretation. There was nothing about teaching in them:

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

Comment author: SilasBarta 24 July 2009 12:39:10AM *  3 points [-]

Why doesn't it make sense? If "good results" in a field tend to be mediocre predictors at best, and you submit a result with much, much better predictive power than anyone in the field could ever hope for, look at it from the perspective of the reviewer. Wouldn't such an article be strong evidence that you're being tricked, or otherwise dealing with someone not worthy of more attention? (Remember the cold fusion case?)

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

Comment author: thomblake 24 July 2009 12:49:19AM *  1 point [-]

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

It's conceivable. But anyone who went into academia for the money is Doing It Wrong, so I tend to give academics the benefit of the doubt that they're enthusiastic about pursuing the betterment of their respective fields.

The hypothesis that academics act in bad faith sounds about as plausible to me as the idea that most viruses are created by Norton to keep them in business.

Comment author: Cyan 24 July 2009 02:35:37AM 6 points [-]

I have personally attended a session at a conference in which a researcher presented essentially perfect prediction of disease status using a biomarker approach and had his results challenged by an aggressive questioner. The presenter was no dunce, and argued only that the results suggested the line of research was promising. Nevertheless, the questioner felt the need to proclaim disbelief in the presented results. No doubt the questioner thought he was pursuing the betterment of his field by doing so.

There's just a point where if someone claims to achieve results you think are impossible, "mistake or deception" becomes more likely to you than "good science".

Comment author: SilasBarta 24 July 2009 01:01:06AM *  1 point [-]

Easy there. I'm not advocating conspiracy theories. But it's not uncommon for results to be turned down because they're too good. Just off the top of my head: how much attention has the sociology/psychology community given to the PUA community, despite the much better results it has achieved in helping men?

How long did it take for the Everett Many-Worlds Interpretation to be acknowledged by Serious Academics?

Plus, status is addictive. Once you're at the top of the field, you may forget why you joined it in the first place.

Comment author: thomblake 23 July 2009 08:26:49PM 0 points [-]

Thanks - I think the first half of that was helpful.

Comment author: Shae 23 July 2009 02:10:32PM 2 points [-]

"Typically, a post of this length should be broken up into a sequence; you run the risk of 'too long; didn't read' "

Possibly true in general, but I found this article so fascinating I didn't have any trouble getting through it.

Comment author: self-actualizing 23 July 2009 12:35:25PM 1 point [-]

I finally created an account just so I could 'up-vote' this post, which I enjoyed. I think it shows a depth of thought and introspection that is very helpful. Perhaps this post could be the start of a series?

Comment author: colinmarshall 23 July 2009 02:54:22PM 1 point [-]

I'd like to make it that, but we'll see what I can do.