pjeby comments on It's all in your head-land - Less Wrong

Post author: colinmarshall 22 July 2009 07:41PM (32 points)




Comment author: pjeby 23 July 2009 08:18:48PM 2 points

He's saying that the dominant paradigm in the "soft" sciences is to treat your subjects as black boxes performing semi-random transformations of inputs into outputs... without ever really trying to understand (in a completely reductionist way) what's going on inside the box.

The "hard" sciences don't work that way, of course: you don't need to test a thousand different pieces of iron and copper just to get a statistical idea of which one has the greater heat capacity, for example.

To continue the analogy, it's as if the soft sciences have no calorimeters, thermometers, or scales with which to actually measure the relevant thing, and so are instead measuring something else that only weakly correlates with the thing we want to measure.

PCT, btw, proposes that behavior -- in the sense of actions taken by organisms -- is the "weakly correlated" thing, and that perceptual variables are the thing we actually want to measure. And that, with appropriate experimental design, we can isolate and measure those variables on a per-subject basis, eliminating the need to test huge groups just to get a vague idea of what's going on.

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

Comment author: orthonormal 24 July 2009 12:32:32AM * 5 points

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

This doesn't make sense to me; sharper predictive success becomes unfavorable for publication? If this was written publicly, can you provide the source?

Comment author: pjeby 24 July 2009 01:11:50AM * 9 points

From Teaching Dogma in Psychology, a lecture by Dr. Richard Marken, Associate Professor of Psychology at Augsburg College:

Psychologists see no real problem with the current dogma. They are used to getting messy results that can be dealt with only by statistics. In fact, I have now detected a positive suspicion of quality results amongst psychologists. In my experiments I get relationships between variables that are predictable to within 1 percent accuracy. The response to this level of perfection has been that the results must be trivial! It was even suggested to me that I use procedures that would reduce the quality of the results, the implication being that noisier data would mean more.

The lecture was Dr. Marken's farewell speech. After five years of unsuccessfully trying to interest his peers in the improved methods made possible by PCT (most lost interest when they understood enough to realize that it was a major paradigm shift), he chose to resign his professorship, rather than continue to teach what he had come to believe (as a result of his PCT studies) was an unscientific research paradigm. As he put it:

It would be like having to teach a whole course on creationism and then having a “by the way, this is the evolutionary perspective” section. Why waste time on non-science? From my point of view, most of what is done in the social sciences is scientific posturing and verbalizing.

It's an interesting read, whether you agree with his conclusions or not. Not many people have the intellectual humility about their own field to accept the ideas of an outsider, question everything they've learned, and then resign when they realize they can't, in good conscience, teach the established dogma:

So my problem is what I, as a teacher, should do. I consider myself a highly qualified psychology professor. I want to teach psychology. But I don’t want to teach the dogma, which, as I have argued, is a waste of time. So, do I leave teaching and wait for the revolution to happen? I’m sure that won’t be for several decades. Thus I have a dilemma—the best thing for me to do is to teach, but I can’t, because what I teach doesn’t fit the dogma. Any suggestions?

Edit to add: it appears that, 20 years later, Dr. Marken is now considering a return to teaching, as he reports on his 25 years of PCT research.

Comment author: orthonormal 24 July 2009 07:24:14PM * 2 points

Thanks for the citation. I know it's a bother to do so, but I'd appreciate it if you linked your sources more often when they're publicly available but unfamiliar to the rest of us.

Comment author: Vladimir_Nesov 24 July 2009 09:51:54AM * 0 points

And the results were never published in any form? The revolutionary results were rejected by all publication venues in the field? This story is a lie, an excuse from this whining-based field:

It's the government's fault, that taxes you and suppresses the economy - if it weren't for that, you would be a great entrepreneur. It's the fault of those less competent who envy your excellence and slander you - if not for that, the whole world would make a pilgrimage to admire you. It's racism, or sexism, that keeps you down - if it weren't for that, you would have gotten so much further in life with the same effort.

Comment author: pjeby 26 July 2009 02:20:28AM 6 points

And the results were never published in any form?

In the second link I gave, Marken self-cites 6 of his papers that were published in various journals over the years. See page 8 of the PDF. I don't know if there are any more publications than that, since Marken said he was only giving a 45-minute summary of his 25 years' work. (Oh, and before he learned PCT, he wrote a textbook on experimental design in psychology.)

However, I suspect that since you missed those bits, there's a very good chance you didn't read either of the links -- and that would be a mistake, if your goal is to understand, rather than to simply identify a weak spot to jump on. You have rounded a very long distance to an inaccurate cliche.

See, Marken never actually complained that he couldn't get published; he complained that he could not abide teaching pre-PCT psychology, as he considered it equivalent to pre-Darwin biology or pre-Galileo physics, and it would therefore be silly to spend most of a semester teaching the wrong thing only to turn around at the end and explain why everything the students had just learned was wrong. That was the issue that led him to leave his professorship, not publication issues.

Comment author: Cyan 26 July 2009 02:53:00AM 3 points

I was about to write that Marken should have considered getting an affiliation to an engineering department. Engineers love them some closed-loop systems, and there would probably have been scope for research into the design of human-machine interactions. Then I read his bio, and learned that that was pretty much what he did, only as a consultant, not an academic.

Comment author: Vladimir_Nesov 26 July 2009 07:37:25AM * -1 points

The words you used in the original comment don't lend themselves to the new interpretation. There was nothing about teaching in them:

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

Comment author: Cyan 26 July 2009 03:08:49PM * 7 points

There's no contradiction between what pjeby wrote in his original comment and what he wrote subsequently about Marken. In this exchange, you seem to me to be suffering from a negative halo effect -- your (possibly fair) assessment of pjeby's interests and goals in writing on this site has made you uncharitable about this particular anecdote.

Comment author: Vladimir_Nesov 26 July 2009 04:31:44PM * 4 points

You are right; I didn't reread Eby's first comment in full before replying to the second, losing the context, and now that I have, even my first comment seems worded incorrectly.

Comment author: conchis 26 July 2009 03:17:17PM * 1 point

But the subsequent comment was supposed to provide support for the original comment. (It was proffered in response to a request for such support). It therefore seems reasonable to criticise it if it fails to do so, doesn't it?

ETA: Apologies. This comment was stupid. I should learn to read. Withdrawn.

ETA2: Please don't upvote this! I shouldn't be able to gain karma by saying stupid things and then admitting that they were stupid. If we want to incentivise renouncing stupid comments, we should presumably downvote the originals and upvote the renunciations so that the net effect is zero. (Or, if you think the original was correct, I guess you could upvote it and downvote the renunciation.)

Comment author: Cyan 26 July 2009 03:36:25PM * 3 points

But the subsequent comment was supposed to provide support for the original comment.

It did. See the first excerpt pjeby quoted in reply to orthonormal's query.

Comment author: conchis 26 July 2009 03:46:28PM * 1 point

Sorry. You're right. I'm an idiot.

Comment author: pjeby 26 July 2009 04:20:13PM * 4 points

Yes, there's nothing about teaching there. What's your point? The only reason I mentioned the teaching aspect was to debunk the nonsense you were spewing about him not being able to get published.

(For someone who claims to want to keep discussion quality high, and who claims to not want to get involved in long threads with me, you sure do go out of your way to start them, not to mention filling them with misconceptions and projections.)

Comment author: SilasBarta 24 July 2009 12:39:10AM * 3 points

Why doesn't it make sense? If "good results" in a field tend to be mediocre predictors at best, and you then submit a result with much, much better predictive power than anyone in the field could ever hope for, look at it from the perspective of the reviewer. Wouldn't such an article be strong evidence that you're being tricked or otherwise dealing with someone not worthy of more attention? (Remember the cold fusion case?)

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

Comment author: thomblake 24 July 2009 12:49:19AM * 1 point

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

It's conceivable. But anyone who went into academia for the money is Doing It Wrong, so I tend to give academics the benefit of the doubt that they're enthusiastic about pursuing the betterment of their respective fields.

[It] sounds about as plausible to me as the idea that most viruses are created by Norton to keep them in business.

ETA: hmm... awkward wording. "It" above refers to the preceding hypothesis about academics acting in bad faith.

Comment author: Cyan 24 July 2009 02:35:37AM 6 points

I have personally attended a session at a conference in which a researcher presented essentially perfect prediction of disease status using a biomarker approach and had his results challenged by an aggressive questioner. The presenter was no dunce, and argued only that the results suggested the line of research was promising. Nevertheless, the questioner felt the need to proclaim disbelief in the presented results. No doubt the questioner thought he was pursuing the betterment of his field by doing so.

There's just a point where if someone claims to achieve results you think are impossible, "mistake or deception" becomes more likely to you than "good science".

Comment author: SilasBarta 24 July 2009 01:01:06AM * 1 point

Easy there. I'm not advocating conspiracy theories. But it's not uncommon for results to be turned down because they're too good. Just off the top of my head: how much attention has the sociology/psychology community given to the PUA community, despite the much greater results it has achieved in helping men?

How long did it take for the Everett Many-Worlds Interpretation to be acknowledged by Serious Academics?

Plus, status is addictive. Once you're at the top of a field, you may forget why you joined it in the first place.

Comment author: thomblake 23 July 2009 08:26:49PM 0 points

Thanks - I think the first half of that was helpful.