From David Foster Wallace's Infinite Jest:

He could do the dextral pain the same way: Abiding. No one single instant of it was unendurable. Here was a second right here: he endured it. What was undealable-with was the thought of all the instants all lined up and stretching ahead, glittering. And the projected future fear... It's too much to think about. To Abide there. But none of it's as of now real... He could just hunker down in the space between each heartbeat and make each heartbeat a wall and live in there. Not let his head look over. What's unendurable is what his own head could make of it all. What his head could report to him, looking over and ahead and reporting. But he could choose not to listen... He hadn't quite gotten this before now, how it wasn't just the matter of riding out cravings for a Substance: everything unendurable was in the head, was the head not Abiding in the Present but hopping the wall and doing a recon and then returning with unendurable news you then somehow believed.

I've come to draw, or at least to emphasize, a distinction separating two realms between which I divide my time: real-land and head-land. Real-land is the physical world, occupied by myself and billions of equally real others, in which my fingers strike a series of keys and a monitor displays strings of text corresponding to these keystrokes. Head-land is the world in which I construct an image of what this sentence will look like when complete, what this paragraph will look like when complete and what this entire post will look like when complete. And it doesn't stop there: in head-land, the finished post is already being read, readers are reacting, readers are (or aren't) responding and the resulting conversations are, for better or for worse, playing themselves out. In head-land, the thoughts I've translated into words and thus defined and developed in this post are already shaping the thoughts to be explored in future posts, the composition of which is going on there even now.

Head-land is the setting of our predictions. When deciding what actions to take in real-land, we don't base our choices on the possible actions' end results in real-land — by definition, we don't know what will happen in real-land until it already has — but on their end results in head-land. The obvious problem: while real-land is built out of all the information that exists everywhere, head-land is built not even out of all the information in a single brain — and that isn't much — but out of whichever subset of that brain's information happens to be currently employed. Hence the brutally rough, sometimes barely perceptible correspondence between head-land and real-land in the short term and the nearly nonexistent correspondence between them in the long.

One might grant all that while responding that running models in head-land is nevertheless the best predictor of real-land events that any individual has. And that's true, but it doesn't change our apparent tendency to place far more trust in our head-land models than their dismal accuracy could ever warrant. To take but one example: we really seem to believe our own failures in head-land, head-land being the place we do the vast majority — and in some cases, all — of our failing. How many times has someone entertained the dream of, say, painting, but then failed in head-land — couldn't get a head-land show, say, or couldn't even mix head-land's colors right — and abandoned the enterprise before beginning it? How many times has someone started painting in real-land, gotten less-than-perfect results, and then extrapolated that scrap of real-land data into a similarly crushing head-land failure? Even established creators are vulnerable to this; could the novelist suffering from a bout of "writer's block" simply be the unwitting mark of a head-land vision of himself unable to write? The danger of head-land catastrophes that poison real-land endeavors looms over every step of the path. The possibility of being metaphorically laughed out of the classroom, though probably only illusory to begin with, never quite leaves one's mind. The same, to a lesser extent, goes for experiencing rather than creating; someone who refuses to listen to a new album, sample a new cuisine, watch a new film or visit a new art exhibition on the excuse that they already "know what [they] like" appears to have seen, and believed, their head-land self listening, eating or viewing with irritation, repulsion or boredom.

That most of what we get worked up about exists in our imaginations and our imaginations only is less a fresh observation than the stuff of a thousand tired aphorisms. No battle plan survives the first shot fired. You die a thousand deaths awaiting the guillotine. Nothing's as good or as bad as you anticipate. You never know until you try. We have nothing to fear but fear itself. Fear is the mind-killer. It's all in your mind. Don't look down. Quoth John Milton, "The mind is its own place, and in it self, can make a Heaven of Hell, a Hell of Heaven." The more time I spend in head-land, the less time I feel like I should spend in head-land, because its awful, discouraging predictions are practically never borne out in real-land. (Even when one comes close, head-land seems to fail to capture the subjective experience of failure, which I find is either never failure qua failure or always somehow psychologically palliated or attenuated.) Experiencing a disaster in real-land is one thing, and its negative effects are unavoidable, but experiencing hypothetical head-land disasters as negative mental effects in real-land — which, I suspect, we all do, and often — would seem to be optional.

Consider global economic woes. While you more than likely know a few who've had to take cuts in their incomes or find new ones, it's even likelier that you're experiencing all the degradations of destitution in head-land even as your real-land income hasn't substantially shrunk and isn't likely to. You're living out the agonies of fumbling with food stamps in a long, angry grocery line despite the fact that you'll never come close. When one is starved for real-land information, one's head-land self gets hit with the worst possible fate. I hear of someone dying in a gruesome real-land freak accident, and I die a dozen times over in more gruesome, freakier head-land accidents. I visit a remote, unpopulated real-land location and my head-land map contracts, desolately and suffocatingly, to encompass only my lonely immediate surroundings. (Then my glasses fall off and break. But there was time! There was time!) I dream up an idea for implementation in real-land, but even before I've fully articulated it my head-land self is already busy enduring various ruinous executions of it. In head-land, worst-case scenarios tend to become the scenarios, presenting huge, faultily-calculated sums of net present real-land misery.

Fear of the ordeals that play out in head-land is a hindrance, but the paralysis induced by the sheer weight of countless accumulated hypothetical propositions is crippling. Even living high on the hog in real-land is no bulwark against the infinite (and infinitely bad) vicissitudes of head-land. Say you're earning a pretty sweet living playing the guitar in real-land. But what if you'd been born without arms? Or in a time before the invention of the guitar? Or in a time when you would've died of an infection before reaching age twelve? Then you sure wouldn't be enjoying yourself as much. And what if you lose your arms in a horrific fishing accident ten years down the line? Or if you suddenly forget what a guitar is? Or if you die of an infection anyway? Despite the fact that none of these dire possibilities has occurred — or is even likely to occur — they're nonetheless inflicting real-land pain from across the border.

I call this phenomenon a blooming what-if plant, beginning as the innocuous seed of a question — "What if I hadn't done or encountered such and such earlier thing that proved to be a necessary condition to something from which I enjoy and profit now?" — and sprouting rapidly into a staggeringly complex organism, its branches splitting into countless smaller branches which split into yet more branches themselves. More perniciously, this also happens in a situation-specific manner; namely, in situations whose sub-events are particularly unpredictable. The classic example would be approaching the girl one likes in middle school; the possible outcomes are so many and varied, at least in the approacher's mind, that the what-ifs multiply dizzyingly and collectively become unmanageable, especially if his strategy is to prepare responses to all of them. It's no accident that those never-get-the-girl mopes in movies spend so much time vainly rehearsing conversations in advance, and that doing the same in life never, ever works. There's a line to be drawn between the guys in junior high who could talk the girls up effortlessly and the ones who seized up merely contemplating it. I suspect the difference has to do with the ratio of one's relative presence in head-land versus that in real-land.

I would submit that, whatever their results, the dudes who could walk right up to those girls and try their luck habitually spent a lot more time in real-land than in head-land. They probably weren't sitting around, eyes fixed on their own navels, building elaborate fictions of inadequacy, embarrassment and ridicule; if they were, wouldn't they have been just as paralyzed? They appeared to operate on a mental model that either didn't conjure such dire possibilities or, if it did, didn't allow them any decisionmaking weight. "So the chick could turn me down? So what? What if space aliens invade and destroy the Earth? I don't know what'll happen until I try."

This brings up something else Wallace wrote and thought about — equivalent verbs for him, I think — though not, as I dimly recall, in Infinite Jest. In his sports journalism, of which he wrote some truly stunning pieces, he kept looping back to the issue of the correlation and possible causal connection between great athletes' brilliant physical performance and their astonishing unreflectiveness in conversation and prose. I'm thinking of Wallace's profile of Michael Joyce, a not-quite-star tennis player who has no knowledge or interests outside the game and couldn't even grasp the thundering sexual innuendo on a billboard ad. I'm thinking of his review of Tracy Austin's autobiography, a cardboard accretion of blithe assertions, unreached-yet-strongly-stated conclusions and poster-grade sports clichés. What must it be like, Wallace asked, to speak or hear phrases like "step it up" or "gotta concentrate now" and have them actually mean something? Is the sports star's nonexistent inner life not the price they pay for their astonishing athletic gift, but rather its very essence?

One can say many things about bigtime athletes, but that they live in their heads is not one of them. I'd wager that you can't find a group that spends less time in head-land than dedicated athletes; they are, near-purely, creatures of real-land. The dudes who could go right up to the ladies in seventh grade seemed to be, in kind if not in magnitude, equally real-land's inhabitants. It comes as no surprise that so many of them played sports and weren't often seen with books. And not only were they undaunted by the danger (possibly because unperceived) of crushing humiliation, I'd imagine they were inherently less vulnerable to crushing humiliation in the first place, because crushing humiliation, like theoretical arm loss and imagined endeavor failure, is a head-land phenomenon. Humiliation is what makes a million other-people-are-thinking-horrible-thoughts-about-me flowers bloom — but only in head-land. The impact can only hit so hard if one doesn't spend much time there, because in real-land, direct access to another's thoughts is impossible. In head-land, one can't, as their creator, help but have direct access to everyone else's thoughts, and thus if a head-land resident believes everyone's disparaging him, everyone is disparaging him. "So what if they're thinking ill of me?" a full-time real-land occupant might ask. "I can't know that for sure, and besides, they're probably not; how often do you think, in a way that actually affects them, about someone who's been recently embarrassed?"

But there's a problem: saying someone "lives in their head" is more or less synonymous with calling them intelligent. "Hey, look at that brainy scientist go by, lost in thought; the fellow lives in his head!" As for professional athletes, well... let's just acknowledge the obvious, that professional athleticism is not a byword for advanced intellectual capacity. (Wallace once lamented the archetypal "basketball genius who cannot read".) So there's clearly a return on time spent in head-land, and arguing for the benefits of head-land occupancy even to nonintellectuals is a trivial task. How, for instance, would we motivate ourselves without reference to head-land? How could we envision possibilities and thus choose which ones we'd like to realize without seeing them in head-land? Surely even the most narrowly-focused, football-obsessed football player has watched himself polish a Super Bowl ring in head-land. Why else would he strive for that outcome? Head-land is where our fantasies happen, where our goals are formulated, and is that a function we can do without?

Hokey as it sounds, I do consider myself a somewhat "goal-oriented" person, in that I burn a lot of time and mental bandwidth attempting to realize certain head-land states. But, as the above paragraphs reveal, I often experience head-land backfire in the form of discouraging negative imaginings rather than encouraging positive ones. Here I could simply pronounce that I will henceforth only use head-land for envisioning the positive, but it's not quite that easy; I can think of quite a few badly-ending head-land scenarios that I'm happy to experience there — and only there — and take into account when making real-land decisions. The head-land prediction that I'll get splattered if I walk blindly into traffic comes to mind.

And I'm one of the less head-land-bound people I know! I wouldn't be writing this post if I didn't struggle with the damned place, but traits like my near-inability to write fiction suggest that I don't gravitate toward it as strongly as some. Still, I feel the need to minimize the problems that spring forth from head-land without converting myself into an impulsive dumb beast. The best compromise I have at the moment is not necessarily to stem the flow of predictions out of head-land, but simply to ignore the bulk of their content, to crank down their resolution by 90% or so. Since the accuracy of our predictions drops so precipitously as they extend forward in time and grow dense with specifics, what they'd shed would mostly be noise. Noise simply misleads, and attenuating what misleads is the point of this exercise.

There are countless practical ways to implement this. One quick-and-dirty hack to dial down head-land's effect on your real-land calculations is to only pay attention to head-land's shadow plays to the extent that they're near your position in time. If they have to do with the distant future, only consider their broadest outlines: the general nature of the position you envision yourself occupying in twenty years, for instance, rather than the specific event of your buxom assistant bringing you just the right roast of coffee. If they have to do with the past, near or distant, just chuck 'em; head-land models tend to run wild with totally irrelevant oh-if-only-things-had-been-different retrodictions, which are supremely tempting but ultimately counterproductive. (As one incisive BEK cartoon had a therapist say, "Woulda, shoulda, coulda — next!") If they have to do with the near future, they're more valuable, and the nearer the future they deal with, the more attention they would seem to deserve.
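If a software metaphor helps, here is roughly the shape of the heuristic, sketched in toy Python; the zeroing-out of retrodictions and the particular decay curve are illustrative inventions, not anything I'd defend precisely:

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        years_ahead: float  # negative values are retrodictions about the past
        detail: str

    def headland_weight(p, horizon=1.0):
        """Toy weighting: retrodictions get no vote, and a prediction's
        influence decays with its distance into the future."""
        if p.years_ahead < 0:
            return 0.0  # "Woulda, shoulda, coulda -- next!"
        return 1.0 / (1.0 + p.years_ahead / horizon)

    for p in [Prediction(-5, "if only I'd taken that other job"),
              Prediction(20, "the buxom assistant and the perfect roast"),
              Prediction(0.01, "the next sentence, finished")]:
        print(f"{p.detail}: weight {headland_weight(p):.2f}")

The broad outlines of the twenty-year vision survive this weighting; the roast of the coffee does not.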

The concept behind this is one to which I've been devoting thought and practice lately: small units of focus. Alas, this brings us to another set of bromides, athletic and otherwise. One step at a time. Just you and the goal. Break it down. Don't bite off more than you can chew. The disturbing thing is how well operating on such a short horizon seems to work, at least in certain contexts. I find I actually do run better when I think only of the next yard, write better when I think only of the next sentence and talk better when I think only of the subject at hand. When my mind tries instead to load the entire run, the entire essay or the entire conversation, head-land crashes. (This applies to stuff traditionally thought of as more passive as well: I read more effectively when I focus on the sentence, watch films more effectively when I focus on the shot, listen to music more effectively when I focus on the measure.) When Wallace writes about "the head not Abiding in the Present but hopping the wall and doing a recon and then returning with unendurable news" and "[hunkering] down in the space between each heartbeat and [making] each heartbeat a wall and [living] in there", I think this is what he means.

Ignoring all head-land details past a certain threshold, de-weighting head-land predictions by their distance into the future and focusing primarily on small, discrete-seeming, temporally proximate units aren't just techniques to evade internal discouragement, either; they also guard against the perhaps even more sinister (and certainly sneakier) forces of complacency. While failure in head-land can cause one to pack it in in real-land, success in head-land, which is merely a daydream away, can prevent one from even trying in real-land. I can't put it better than Paul Graham does: "If you have a day job you don't take seriously because you plan to be a novelist, are you producing? Are you writing pages of fiction, however bad? As long as you're producing, you'll know you're not merely using the hazy vision of the grand novel you plan to write one day as an opiate."

This is why I'm starting to believe that coming up with great ideas in head-land and then executing them in real-land may be a misconceived process, or at least a suboptimally conceived one. How many projects have been forever delayed because the creator decided to wait until the "idea" was just a little bit better, or, in other words, until the head-land simulation came out a little more favorably? It's plausible that this type of stalling lies at the heart of procrastination: one puts the job off until tomorrow because the head-land model doesn't show it as turning out perfect today, never mind the facts that (a) it'll never be perfect, no matter when it's started and (b) it's unlikely to turn out better with less time available for the work, especially given the unforeseen troubles and opportunities that crop up along the way. I provisionally believe that this a priori, head-land idea stuff can profitably be replaced with small-scale real-land exploratory actions that demand little in the way of time or resource investment. Rather than executing steps one through one hundred in head-land, execute step one in real-land; if nothing else, the data you get in response will be infinitely more reliable and more useful in determining what step two should involve. Those dudes in middle school knew this on some basic level: you just gotta go up to the girl and say something. It's the only gauge you have of whether you should say something more, and of what that something should be. It's all about hammering in the thin end of the wedge.

For what it's worth, I've found this borne out in what little creation I've done thus far. I've reached the point of accepting that I don't know — can't know — how a project's going to turn out, since each step depends on the accumulated effects of the steps that preceded it. All I can do is get clear on my vague, broad goal and put my best foot forward, keeping my mind open to accept all relevant information as it develops. When I started my first radio show, I had a bunch of head-land projections about how the show would be, but in practice it evolved away from them in real-land rather sharply — and, I think, for the better. When I started another one a year later, I knew to factor in this unforeseeable real-land evolution from the get-go and thus kept my ideas about what it was supposed to be broad, flexible and small in number, letting the events of real-land fill in the details as they might. With a TV project only just started, I've tried my hardest to stay out of head-land as much as possible; the bajillion variables involved would send whatever old, buggy software my head-land modeler uses straight to the Blue Screen of Death. (Yes, our brains are Windows-based.) Even if it didn't crash, it's not as if I'd be getting sterling predictions out of it. I have, more grandly speaking, come to accept much more of the future's unknowability than once I did; that goes double for the future of my own works. Modeling a successful work in head-land now seems a badly flawed strategy, to be replaced by taking small steps in real-land and working with its response.

I could frame this as another rung in the climb from a thought-heavier life to an action-heavier life, of approaching and affecting the world as it exists in real-land rather than as it is imagined in head-land. I've never been what one would call an idealist and I suppose I'm drawing no closer to that label. Some regard flight from idealism as flight toward cynicism, but it's cynicism I've been fleeing as well, perhaps even primarily; what is cynicism, after all, but a mistaken reliance on pessimistic head-land conclusions?

 

68 comments

Excellent. This is precisely why I'm always ranting here about actually trying things, and deferring True Theory until you've first had Useful Practice. Otherwise, it's all too likely your models are bullshit based on previously-learned bullshit and entirely unrelated to what would actually happen if you Just Did It Already.

One quibble:

The classic example would be approaching the girl one likes in middle school

I suspect that this might've been better phrased as "The classic example of a guy approaching the girl he likes in middle school", as the way it's phrased now implies the reader is a heterosexual male, and is less inclusive than it'd otherwise be. (It also could've been phrased as "The classic example of me approaching the girl I liked in middle school".)

I think the rest of your statements about that scenario didn't imply the reader was the one doing it, but I'm not 100% positive of that.

Thanks. I expect most of my posts here will be more Useful Practice than True Theory, but only just; my hope is that the Less Wrong community won't spare the downvotes if I stray too far from rationality and too close to self-help territory.

In software development, this is known as being "Agile." Originally, software was designed mostly in head-land (a "Big Design Up Front"), but gradually a different process was pushed wherein a smaller, prototype design would first be constructed, then evaluated for its effects in real-land, and then improved upon, repeatedly. I find it interesting that unlike in the world of sports, where "one step at a time" can be almost universally agreed upon, software development is rife with controversy over whether "Agile software development methods" have any real advantages.
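Schematically, and in toy Python rather than any real framework (every name below is my own illustration, not Agile terminology):

    def agile(build, evaluate, good_enough, max_iterations=10):
        """Iterative development as a feedback loop: build a skeletal
        version, check it against real-land feedback, improve, repeat."""
        product, feedback = None, None
        for _ in range(max_iterations):
            product = build(product, feedback)  # improve and extend, don't discard
            if good_enough(product):
                break
            feedback = evaluate(product)  # data from real-land, not head-land
        return product

    # Toy usage: the "product" is just a number nudged toward a target.
    target = 7
    print(agile(build=lambda p, f: (p or 0) + (f or 1),
                evaluate=lambda p: 1 if p < target else -1,
                good_enough=lambda p: p == target))  # prints 7

The Big Design Up Front approach would correspond to computing every build step before calling evaluate even once.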

"Prototype" isn't an accurate description of what Agile practitioners do. (To save time in later discussion, note that big-A "Agile" refers to people who self-identify as users of Agile techniques; we could use small-a "agile" to refer to the desirable properties a project is supposed to acquire through use of Agile techniques.)

The term implies something that will be thrown away, whereas an Agile practitioner aims to build very rapidly a skeletal version that will be kept, improved (iteratively) and fleshed out (incrementally).

You're quite right about it being controversial.

What interest (if any) does this (LW) community have in applying rationality (epistemic and instrumental) to the process of developing software?

There's a landmark paper by Parnas et al. on this topic, which starts off with "Most of us like to think of ourselves as rational professionals. However, to many observers, the usual process of designing software appears quite irrational..." The first three sections by themselves make for good reading on the distinction between epistemic and instrumental rationality.

I came across LW mostly by accident - I stumbled onto the series of articles about quantum physics, found myself glued to the screen until I got to the end, then read the primer on Bayes and its follow-up on EY's site before coming back to sample random sections of LW. I was vaguely aware of Bayes' theorem previously but the promise of improving my thinking got my attention.

Strategies for software development in general, and Agile in particular, are currently my main area of professional interest. Some of LW's readership apparently have some interest in developing software that thinks, and this would seem to entail getting real good at developing software in general... but I didn't find much discussion of that particular topic in my random sampling (and no "software" tag to point me to it if it exists). Hence my question above...

I find it interesting that unlike in the world of sports, where "one step at a time" can be almost universally agreed upon, software development is rife with controversy over whether "Agile software development methods" have any real advantages.

Let's not forget that Agile refers to a particular software development methodology (or family thereof) and people can easily implement a lot of good features of Agile without actually following that methodology. See also

That "see also" link appears to be written by someone whose knowledge of Agile is at best skimpy.

Well it was written by someone at Yahoo! who used Agile and its variants for many years and is fairly respected in the web development community.

But you are entitled to your opinion.

Sounds like the concept of "agility" could be generalized richly indeed.

IAWYC and your examples reflect my own experience.

But unless there's some difference between the amount of planning, thinking, and daydreaming necessary for our ancestral environment and the amount of those things necessary now, evolutionary psychology at least provides some weak evidence that humans on average plan, think, and daydream about the right amount. That suggests that maybe the costs to asking people on dates and getting stuff done are balanced by benefits elsewhere.

This is an important consideration. I just can't figure out how to test it.

Let's please not all focus on the

Those dudes in middle school knew this on some basic level: you just gotta go up to the girl and say something.

part for this post, yes?

First off, welcome to Less Wrong! Check out the welcome thread if you haven't already.

You have a good writing style, but I hope you'll pardon me if I make a few suggestions based on the usual audience for Less Wrong posts:

Typically, a post of this length should be broken up into a sequence; you run the risk of "too long; didn't read" reactions after 1000 words, let alone 3000, and the conversation in the comments is usually sharper if the post has a single narrow focus. Usually, the analysis of a situation and the recommendations become separate posts if both are substantial.

Secondly, with the notable exception (sometimes) of P.J. Eby, we're often mistrustful of theories borne of introspection and anecdotes, and especially of recommendations based on such theories. There's therefore a norm of looking for and linking to experimental confirmation where it exists, and being doubly cautious if it doesn't. In this case, for instance, you could find some experimental evidence on choking that supports your thesis. This also forces you to think carefully about what sort of things your model predicts and doesn't predict, since at first glance it seems vague to the point of danger. The more specific you can get about these phenomena, the more useful your post will be.

Although I agree that a theory born of empirical evidence is better than one born of introspection, I think it is kind of dangerous to introspect, develop a theory, and then, when you're posting it on Less Wrong, look for some evidence to support it so that you can say it's empirical. It risks reducing The Procurement of Evidence to a ritual.

See, the problem is, he could probably tie the evidence about choking into his theory. But if he had the opposite theory, he could probably tie in studies like the ones showing mental practice can improve sports performance, and the one showing that problem-solving areas of the brain are highly active when we daydream, to support that. That means that the fact that he can find a tangentially related study doesn't really make it more likely that the post is true. It'd just make us feel all nice and empirical.

The matter would be different if there happened to be a study about this exact topic, or if there had been some study that had inspired him to come up with this theory. But "come up with theory, find supporting evidence" seems dangerous to me.

Isn't the answer simply that one shouldn't misinterpret what it means for evidence to be supporting?

Oh, good point. I think of "come up with theory, think about what it implies, look for evidence one way or the other" as the ideal, but the difficulty is that confirming information is more salient in my memory than disconfirming.

On the other hand, filtered evidence is still evidence, and a lack of outside evidence can be a sign that there's no good confirming evidence. (Or, in this case, just a sign that the poster is new around here.)

Secondly, with the notable exception (sometimes) of P.J. Eby, we're often mistrustful of theories borne of introspection and anecdotes, and especially of recommendations based on such theories.

I think you underestimate just how mistrustful of introspection and armchair theorizing I am. For example, I'm certainly mistrustful of the armchair theorizing you're doing right now. ;-)

In the specific area of akrasia and practical arts of motivation, I am especially mistrustful of the theorizing that accompanies most psychology experiments I read about -- even when I bypass the popularized version and go to the original paper.

The typical paper I end up seeing combines a wild speculation with a spectacularly underwhelming actual result, due in large part to stupid methodological mistakes, like trying to statistically analyze people as groups, rather than ever finding out what individuals are doing, or controlling for what those individuals do/don't understand about a procedure, or if they're even following the procedure they're supposed to.

If you do the exact same thing with 100 people, you will be rather lucky to not get 100 different results. So to get more than an interesting anecdote out of an experiment, you'd better be able to vary what you're doing on a more individual basis.

Which is also why it's usually ill-advised to try to take psych-paper speculations and turn them into practical advice, versus copying what somebody reasonably similar to you has anecdotally done and received results from. The latter is FAR more likely to directly translate to something useful.

It seems to me that this is an excellent example of considering the construction of a predictive model of the internal workings of a black box to be "real science" while the dominant paradigm has become one of treating statistical models of the black boxes as performing random number generator controlled transformations of inputs into outputs to be "real science".

To be fair, not all psychological statistics are bunk. It's just that it's incredibly slow the way it's done, and all you can get from any one experiment is a vague idea like, "thinking concretely about a task makes it more likely you'll do it." Direct marketers knew that ages ago.

Direct marketers knew that ages ago.

For certain values of "knew".

Science has different epistemic standards.

ETA: though you're correct to point out that the papers mentioned above don't seem to follow them very well.

For certain values of "knew".
Science has different epistemic standards.

The marketers knew it well enough that the scientists should have studied it. That they didn't was a serious epistemic failing; it's not clear that these different standards are better. Denying something on the grounds that you haven't studied it enough and refusing to study it is almost a fully general counterargument.

Of course. Unfortunately for people needing personal and practical applications, science isn't caught up and may never be, precisely because they're not looking for the same kinds of things. (They're looking for "true" rather than "useful".)

I couldn't parse this. Could you maybe explain it in multiple (shorter) sentences?

He's saying that the dominant paradigm in the "soft" sciences is that you treat your subjects as black boxes performing semi-random transformation of inputs into outputs... without ever really trying to understand (in a completely reductionist way) what's going on inside the box.

The "hard" sciences don't work that way, of course: you don't need to test a thousand different pieces of iron and copper, just to get a statistical idea of which one maybe has a bigger heat capacity, for example.

To continue the analogy, it's as if the soft sciences have no calorimeters, thermometers, and scales with which to actually measure the relevant thing, and so instead are measuring something else that only weakly correlates with the thing we want to measure.

PCT, btw, proposes that behavior -- in the sense of actions taken by organisms -- is the "weakly correlated" thing, and that perceptual variables are the thing we actually want to measure. And that, with appropriate experimental design, we can isolate and measure those variables on a per-subject basis, eliminating the need to test huge groups just to get a vague idea of what's going on.
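To make that concrete, here is a toy tracking-task model in the PCT style; the gain, noise level and every other parameter are invented for illustration, not taken from Marken's actual experiments:

    import random

    def pct_tracking(steps=500, gain=8.0, dt=0.01):
        """Toy PCT control loop: the simulated subject moves a cursor so that
        the perceived cursor-target gap stays at a reference value of zero."""
        target, cursor = 0.0, 0.0
        for _ in range(steps):
            target += random.gauss(0, 0.05)  # the target wanders unpredictably
            perception = cursor - target     # the controlled perceptual variable
            error = 0.0 - perception         # reference (no gap) minus perception
            cursor += gain * error * dt      # action works to cancel the error
        return target, cursor

    target, cursor = pct_tracking()
    print(f"target {target:+.3f}, cursor {cursor:+.3f}")  # cursor stays near target

The behavior (the cursor's path) looks noisy and unrepeatable from the outside, while the controlled perceptual variable (the gap) stays pinned near its reference; fitting models at that level, per subject, is where the unusually high predictive accuracy comes from.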

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

This doesn't make sense to me; sharper predictive success becomes unfavorable for publication? If this was written publicly, can you provide the source?

From Teaching Dogma in Psychology, a lecture by Dr. Richard Marken, Associate Professor of Psychology at Augsburg College:

Psychologists see no real problem with the current dogma. They are used to getting messy results that can be dealt with only by statistics. In fact, I have now detected a positive suspicion of quality results amongst psychologists. In my experiments I get relationships between variables that are predictable to within 1 percent accuracy. The response to this level of perfection has been that the results must be trivial! It was even suggested to me that I use procedures that would reduce the quality of the results, the implication being that noisier data would mean more.

The lecture was Dr. Marken's farewell speech. After five years of unsuccessfully trying to interest his peers in the improved methods made possible by PCT (most lost interest when they understood enough to realize that it was a major paradigm shift), he chose to resign his professorship, rather than continue to teach what he had come to believe (as a result of his PCT studies) was an unscientific research paradigm. As he put it:

It would be like having to teach a whole course on creationism and then having a “by the way, this is the evolutionary perspective” section. Why waste time on non-science? From my point of view, most of what is done in the social sciences is scientific posturing and verbalizing.

It's an interesting read, whether you agree with his conclusions or not. Not a lot of people have the intellectual humility regarding their own field to accept the ideas of an outsider, question everything they've learned, and then resign when they realize they can't, in good conscience, teach the established dogma:

So my problem is what I, as a teacher, should do. I consider myself a highly qualified psychology professor. I want to teach psychology. But I don’t want to teach the dogma, which, as I have argued, is a waste of time. So, do I leave teaching and wait for the revolution to happen? I’m sure that won’t be for several decades. Thus I have a dilemma—the best thing for me to do is to teach, but I can’t, because what I teach doesn’t fit the dogma. Any suggestions?

Edit to add: it appears that, 20 years later, Dr. Marken is now considering a return to teaching, as he reports on his 25 years of PCT research.

Thanks for the citation. I know it's a bother to do so, but I'd appreciate it if you linked your sources more often when they're publicly available but unfamiliar to the rest of us.

And the results were never published in any form? The revolutionary results were rejected by all publication venues in the field? This story is a lie, an excuse of this whining-based field:

It's the government's fault, that taxes you and suppresses the economy - if it weren't for that, you would be a great entrepreneur. It's the fault of those less competent who envy your excellence and slander you - if not for that, the whole world would pilgrimage to admire you. It's racism, or sexism, that keeps you down - if it weren't for that, you would have gotten so much further in life with the same effort.

And the results were never published in any form?

In the second link I gave, Marken self-cites 6 of his papers that were published in various journals over the years. See page 8 of the PDF. I don't know if there are any more publications than that, since Marken said he was only giving a 45-minute summary of his 25 years' work. (Oh, and before he learned PCT, he wrote a textbook on experimental design in psychology.)

However, I suspect that since you missed those bits, there's a very good chance you didn't read either of the links -- and that would be a mistake, if your goal is to understand, rather than to simply identify a weak spot to jump on. You have rounded a very long distance to an inaccurate cliche.

See, Marken never actually complained that he couldn't get published, he complained that he could not abide teaching pre-PCT psychology, as he considered it equivalent to pre-Darwin biology or pre-Galileo physics, and it would therefore be silly to spend most of a semester teaching the wrong thing in order to turn around at the end and explain why everything they just learned was wrong. That was the issue that led him to leave his professorship, not publication issues.

I was about to write that Marken should have considered getting an affiliation to an engineering department. Engineers love them some closed-loop systems, and there would probably have been scope for research into the design of human-machine interactions. Then I read his bio, and learned that that was pretty much what he did, only as a consultant, not an academic.

The words you used in the original comment don't lend themselves to the new interpretation. There was nothing about teaching in them:

(One psychology professor wrote how, once he began using PCT models to design experiments, his results were actually too good -- his colleagues began advising him on ways to make changes so that his results would be more vague and ambiguous... and therefore more publishable!)

There's no contradiction between what pjeby wrote in his original comment and what he wrote subsequently about Marken. In this exchange, you seem to me to be suffering from a negative halo effect -- your (possibly fair) assessment of pjeby's interests and goals in writing on this site has made you uncharitable about this particular anecdote.

You are right: I didn't reread Eby's first comment in full before replying to the second, losing the context, and now that I have, even my first comment seems worded incorrectly.

But the subsequent comment was supposed to provide support for the original comment. (It was proffered in response to a request for such support). It therefore seems reasonable to criticise it if it fails to do so, doesn't it?

ETA: Apologies. This comment was stupid. I should learn to read. Withdrawn.

ETA2: Please don't upvote this! I shouldn't be able to gain karma by saying stupid things and then admitting that they were stupid. If we want to incentivise renouncing stupid comments, we should presumably downvote the originals and upvote the renunciations so that the net effect is zero. (Or, if you think the original was correct, I guess you could upvote it and downvote the renunciation.)

But the subsequent comment was supposed to provide support for the original comment.

It did. See the first excerpt pjeby quoted in reply to orthonormal's query.

Sorry. You're right. I'm an idiot.

Yes, there's nothing about teaching there. What's your point? The only reason I mentioned the teaching aspect is to debunk the nonsense you were spewing about him not being able to be published.

(For someone who claims to want to keep discussion quality high, and who claims to not want to get involved in long threads with me, you sure do go out of your way to start them, not to mention filling them with misconceptions and projections.)

Why doesn't it make sense? If "good results" in a field tend to be mediocre predictors at best, and then you submit a result with much, much better predictive power than anyone in the field could ever hope for, look at it from the perspective of the reviewer. Wouldn't such an article be strong evidence that you're being tricked or otherwise dealing with someone not worthy of more attention? (Remember the cold fusion case?)

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

And even if it doesn't make rationalist sense, isn't it understandable why academics wouldn't like being "one-upped" so badly, and so would suppress "too good" results for the wrong reasons?

It's conceivable. But anyone who went into academia for the money is Doing It Wrong, so I tend to give academics the benefit of the doubt that they're enthusiastic about pursuing the betterment of their respective fields.

[It] sounds about as plausible to me as the idea that most viruses are created by Norton to keep them in business.

ETA: hmm... awkward wording. "It" above refers to the preceding hypothesis about academics acting in bad faith.

I have personally attended a session at a conference in which a researcher presented essentially perfect prediction of disease status using a biomarker approach and had his results challenged by an aggressive questioner. The presenter was no dunce, and argued only that the results suggested the line of research was promising. Nevertheless, the questioner felt the need to proclaim disbelief in the presented results. No doubt the questioner thought he was pursuing the betterment of his field by doing so.

There's just a point where if someone claims to achieve results you think are impossible, "mistake or deception" becomes more likely to you than "good science".

Easy there. I'm not advocating conspiracy theories. But it's not uncommon for results to be turned down because they're too good. Just off the top of my head, how much attention has the sociology/psychology community given to the PUA community, after the much greater results they've achieved in helping men?

How long did it take for the Everett Many-Worlds Interpretation to be acknowledged by Serious Academics?

Plus, status is addictive. Once you're at the top of the field, you may forget why you joined it in the first place.

Thanks - I think the first half of that was helpful.

Thanks; duly noted. I plan to write a few posts on the "road testing" of Less Wrong and Less Wrong-y theories about rationality and the defeat of akrasia, so these are helpful pointers.

"Typically, a post of this length should be broken up into a sequence; you run the risk of 'too long; didn't read' "

Possibly true in general, but I found this article so fascinating I didn't have any trouble getting through it.

I finally created an account just so I could 'up-vote' this post, which I enjoyed. I think it shows a depth of thought and introspection that is very helpful. Perhaps this post could be the start of a series?

I'd like to make it that, but we'll see what I can do.

Reminds me of this Ask Metafilter thread:

As far as I am aware, I am a mentally healthy, well-adjusted, and sane person with no disorders. But I have a strange, fairly innocuous quirk which seems beyond my control and I'm curious about it...

When I think of / remember something embarrassing from my life, I compulsively make some kind of noise. It seems to happen unconsciously, before my censor can catch it and stop myself (it even happens when I am in a quiet or inappropriate place).

[a bunch of other folks chime in confirming that they have similar experiences, and one user writes]

I am convinced it's the people who don't do this who go on to be rich, leaders, etc.

Be careful with differentiating cause and effect. I suspect we often find ourselves spending time in head-land because we want to avoid some uncomfortable real-land experience. The example of talking to the girl: it's a daunting situation, so we compensate by trying to think through all possible scenarios instead of facing the fear and just doing it. To prove that this is the case, consider a pleasurable activity like playing a video game. In this case you won't spend much time in head-land but rather turn on the game and play some rounds in real-land without thinking too much about it.

I think it has a lot to do with Motivated Stopping and Motivated Continuation.

We keep on thinking when we want to avoid painful action; we do less thinking when the activity is pleasurable.

I found this an interesting analysis, even if based on introspection. That you've personally found success with it (a number of times apparently) really notches up my consideration of the idea and willingness to try it, in line with what P.J. Eby said.

One idea I'd like to add, which may be implied in your post but I'm not sure, involves the "wrong jungle" effect. Stephen Covey broke down operation (as an organization or individual, in which case imagine tiny people in your brain) into the following parts: the labor are the people out in front with the machetes hacking through the brush, the management are the people standing behind running seminars on machete use, repairing machetes, supplying water, and bringing out a boom box for something to listen to, and the leadership is the person who's climbed a nearby tall tree and calls out "Wrong Jungle!"

I've had a few experiences where I'll be working away on something, focused on the relatively short term (though more in head-land than your post advocates), making fair progress. Then one day I'll be musing about the farther future, when playing things through reveals to me alternative courses of action which are far more likely to be effective.

Sometimes I find that when I get a seemingly good idea for something to do, a project we'll say, I get excited at the opportunity and start in on it, but when I hold myself back and ask myself to think about where this is roughly going to lead, I find that the whole series of projects is not all that great of an idea. This is especially the case when the one single project is nice and easy looking and the alternative pathway is much harder and more unpleasant. The affective nature of the near choices can prohibit me from considering the likely merits of the long term paths. An example could be looking far forward and realizing that the fun task of putting together an AI is disastrous unless you can confidently make it Friendly, like I detected here. The kicker is that if you find you are in the "wrong jungle" it may be that months or years of work are largely useless, which is all the more reason to put serious and at least moderate effort into making sure you are in the "right jungle."

My own assimilation of your post for now is that while head-land may be great for thinking what to do in the long term (remembering to keep the plan vague), it might be quite unwelcome in the details of the realization of the plan ("Hey I can see from up on this tree a slight dip in the tree canopy, there might be a drop there, and just maybe some dangerous animals sleep in that dip"). While you didn't seem to draw attention to this, you didn't proscribe long term thinking as a whole either so I'm not assuming you haven't thought of this.

I upvoted this partly because it was really well written and I would love to see more articles of this caliber.

As for the topic... I guess I don't disagree on any particular point and I think the insights are good to note. Personally I seemed to head in the opposite direction when faced with this problem:

One might grant all that while responding that running models in head-land is nevertheless the best predictor of real-land events that any individual has. And that's true, but it doesn't change our apparent tendency to place far more trust in our head-land models than their dismal accuracy could ever warrant.

Instead of throwing out the head-land models and simulations as not helpful I look for ways to make the head-land models more accurate. The success of specific head-land models is more or less easy to measure: Did the predictions occur? The solution is two-pronged: look for better accuracy and ditch the apparent accuracy bias.
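One concrete way to do that measuring (my own sketch, nothing rigorous): log head-land predictions with confidence levels and score them against real-land outcomes, for instance with a Brier score:

    def brier_score(forecasts):
        """Mean squared gap between stated confidence and actual outcome;
        0.0 is perfect, 0.25 is what always guessing 50% would earn."""
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    # (confidence assigned in head-land, what real-land delivered: 1=yes, 0=no)
    log = [(0.9, 1), (0.8, 0), (0.3, 0), (0.6, 1)]
    print(f"Brier score: {brier_score(log):.3f}")  # 0.225, barely beats guessing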

The danger of head-land catastrophes that poison real-land endeavors looms over every step of the path. The possibility of being metaphorically laughed out of the classroom, though probably only illusory to begin with, never quite leaves one's mind.

Agreed; a major obstacle to measuring the success of head-land predictors comes when the predictions themselves affect the outcome of real-land. Namely, both fear of failures imagined and the relaxing opium of daydreams.

In my experience, it is possible to shoo the metaphorical laughter away. Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results. Instead of responding to imagined failure with real-land fear, forge onward with the intent of measuring the success of your head-land.

Fearing head-land failures to the degree of not acting in the real world truly is poison. But shutting off our best predictor because it may predict inaccurate failure seems to be letting a valuable tool fall away. It is better to not emulate than to not act, but is it not possible to increase our accuracy?

I suppose my point can be boiled down to this: My head-land has been known to guess correctly. Are these successes a false pattern? Are they evidence of a talent that can be honed into something useful to my real-land self? My head-land is telling me the latter.

Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results.

This is wise. Getting the necessary distance would indeed work, as would improving head-land accuracy, though I'm dubious about the extent to which it can be improved. In any case, I'm not quite to either goal myself yet. And if your own head-land is making accurate predictions, that's a good thing; I just can't get those kinds of results out of mine. Yet.

Another random comment: Head-land is large and can be split into distinct patterns of behavior. Simulations about potential mates are probably going to be on different emotional circuits than strategizing about chess. (Unless, of course, you play chess differently than I do...) My hunches tell me that the chess simulations are going to be a little more accurate.

Rationality certainly helps when testing the accuracy of head-land. My math teacher used to warn me about turning my brain off when working through math problems. If the answer didn't make intuitive sense check my work for bizarre mistakes. It turns out my head-land simulation of basic math problems is relatively accurate. Knowing its level of accuracy is an excellent tool for determining if we're in the wrong jungle.

Could I get step by step instructions on how to be more active in real-land instead of head-land?

I second this request.

Could I get step by step instructions on how to be more active in real-land instead of head-land?

No. Step-by-step instructions reside in head-land. (Maybe you know this and were just joking?)

No. Step-by-step instructions reside in head-land.

On reflection, that was glib. True, but glib, and not the entire truth.

In fact, I have learned techniques from certain personal development courses, specifically aimed at discovering and uprooting all the damaging head-stuff (by which I mean the subset that is damaging, not a claim that it's all damaging). But (a) it's just ordinary personal development stuff that I only have my personal experience with to tell, and (b) I can't communicate it anyway just by writing however many words.

Try more stuff in real-land.

You should check out the welcome thread and post an intro or something.

why don't you try zen, maybe you'll visit the blissful now....

hmmm, downvoted,

zen is really a great technique for staying in real-land: just doing no thinking (or rather, no pondering)

Hey Colin, I enjoyed reading this, head land is definitely a useful paradigm. In fact, it's so useful that it enables me to share two of my favorite experiences from head land:

1) Two characters in my head land will get into an argument or heated conversation over some point. Usually one of the characters is a future version of myself. What then happens is that one of the characters will make an extremely good point, quasi-irrefutable by the other. The person making the really good point is usually the aforementioned future version of myself. Then the other character will make the rejoinder that this point is irrelevant because this conversation is only happening in my head and is unlikely to ever occur in real life.

2) Two people will be having a conversation in a future scheme, and one of them is me. The conversation is going quite swimmingly, often full of only weakly deflected praise for my character. And then I remark how it is quite odd that this conversation is actually happening because it is exactly the same conversation that I had once envisioned having in my head. And then the other person will say, did you envision me saying this, too? And then I will say, yes.

Sometimes these head land occurrences make me laugh, sometimes they make me sad (because boys don't cry!). One interesting question is whether people are spending comparatively more time in head land now than at other periods of history, and what the implications would be if the answer is yes.

Lots of great stuff in this post. Don't have time to comment on anything in particular, but just wanted to say: this is the best-written piece I've ever seen on lesswrong. Keep writing.

This is well-written, and an organizational outline would make it better, so that I know what the main points are, can evaluate the support for them, and can remember them.

Minor typo: it seems there's a missing word in this sentence:

Head-land is the world in which I constructing an image of what this sentence will look like when complete

Thank y'kindly. I upvote any and all comments that correct mistakes that would've made me look like a sub-lingual doof otherwise.

I thought that was on purpose.

Nah; it was supposed to read "in which I construct." I just fumbled the editing.

Typo: Change "nevery" to "never" in the last paragraph.


I really enjoyed reading this.

Glad to hear it. I aim to please.