Eliezer_Yudkowsky comments on A Critique of Leverage Research's Connection Theory - Less Wrong

Post author: peter_hurford 20 September 2012 04:28AM


Comment author: Eliezer_Yudkowsky 22 September 2012 02:01:53PM 20 points

My recollection of my conversation with Geoff, at a Berkeley LW meetup held in a UC Berkeley building that had a statue of a dinosaur in it, is that it went like this. Disclaimer: My episodic memory for non-repeated conversations is terrible and it is entirely possible that there are major inaccuracies here. Disclaimer 2: This is not detailed enough to count as an engaged critique, and Geoff is not obliged to respond to it since I put very little effort into it myself (it is logically rude to demand ever-more conversational effort from other people while putting in very little yourself).

Geoff: I've been working on an incredible new mental theory that explains everything.

Eliezer (internally): That's not a good sign, but he seems earnest and intelligent. Maybe it's something innocuous or even actually interesting.

Eliezer (out loud): And what does it say?

Geoff: Well, I haven't really practiced explaining it, and I don't expect you to believe it, but (explains CT)

Eliezer (internally): Well this is obviously wrong. Minds just don't work by those sorts of bright-line psychoanalytic rules written out in English, and proposing them doesn't get you anywhere near the level of an interesting cognitive algorithm. Maybe if he's read enough of the Sequences and hasn't invested too many sunk costs / bound up too many hopes in it, I can snap him out of it in fairly short order?

Eliezer (out loud): Where does CT make a different prediction from the cognitive science I already know that I couldn't get without CT?

Geoff: It predicts that people will change their beliefs to believe that their desires will be fulfilled...

Eliezer (internally): Which sounds a lot like standard cognitive dissonance theory, which itself has been modified in various ways, but we aren't even at the point of talking about that until we get out of the abstract-belief trap.

Eliezer (out loud): No, I mean some sort of sensory experience. Like your eyes seeing an apple fall from a tree, or something like that. What does CT say I should experience seeing, that existing cognitive science wouldn't tell me to expect?

Geoff: (Something along the lines of "CT isn't there yet", I forget the exact reply.)

Eliezer (internally): This is exactly the sort of blind alley that the Sequences are supposed to prevent smart people from wasting their emotional investments on. I wish I'd gotten this person to read the Belief and Anticipation sequence before CT popped into his head, but there's no way I can rescue him from the outside at this point.

Eliezer (out loud): Okay, then I don't believe in CT because without evidence there's no way you could know it even if it was true.

I think there might've also been something about me trying to provide a counterexample like "It is psychologically possible for mothers to believe their children have cancer" but I don't recall what Geoff said to that. I'm not sure whether or not I gave him any advice along the lines of, "Try to explain one thing before explaining everything."

Comment author: Geoff_Anders 22 September 2012 03:08:25PM 9 points

If I recall correctly, I was saying that I didn't know how to use CT to predict simple things of the form "Xs will always Y" or "Xs will Y at rate Z", where X and Y refer to simple observables like "human", "blush", etc. It would be great if I could do this, but unfortunately I can't.

Instead, what I can do is use the CT charting procedure to generate a CT chart for someone and then use CT to derive predictions from the chart. This yields predictions of the form "if a person with chart X does Y, Z will occur". These predictions frequently do not overlap with what existing cognitive science would have one expect.

The way I could have evidence in favor of CT would be if I had created CT charts using the CT procedure, used CT to derive predictions from the charts, and then tested the predictions. And I've done this.

Comment author: Eliezer_Yudkowsky 23 September 2012 03:55:01AM 12 points

These predictions frequently do not overlap with what existing cognitive science would have one expect.

What is an example of a case you've actually observed where CT made a falsifiable, bold, successful prediction? ("Falsifiable" - say what would have made the prediction fail. "Bold" - explain what a cogsci guy or random good human psychologist would have falsifiably predicted differently.)

Comment author: Geoff_Anders 23 September 2012 12:48:37PM 6 points

For at least 2 years prior to January 2009, I procrastinated 1-3 hours a day reading random internet news sites. After I created my first CT chart, I made the following prediction: "If I design a way to gain information about the world that does not involve reading internet news sites and that does not alter my way of achieving my other intrinsic goods, then I will stop spending time reading these internet news sites." The "does not alter my way of achieving my other intrinsic goods" was unpacked. It included: "does not alter my way of gaining social acceptance", "does not alter my relationships with my family members", etc. The specifics were unpacked there as well.

This prediction was falsifiable - it would have failed if I had kept reading internet news sites. It was also bold - cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior. And it was also successful - after implementing the recommendation in January 2009, I stopped procrastinating as predicted. Now, of course there are multiple explanations for the success of the prediction, including "CT is true" and "you just used your willpower". Nevertheless, this is an example of a falsifiable, bold, successful prediction.

Comment author: pjeby 24 September 2012 03:46:16AM 7 points

cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior.

Your model of human psychologists needs updating, then. Books on hypnotism that I read when I was 11 discuss needs substitution, secondary gain, etc. that would be relevant to making such a prediction. Any good human psychologist knows to look for what gains a behavior produces.

Of course, maybe you meant "good (random human) psychologists", not "good, random (human psychologists)" - i.e., psychologists who study the behavior of random humans, rather than people who help individual humans... in which case, that's a really low bar for CT to leap over.

Also:

it would have failed if I had kept reading internet news sites.

This is also a really low bar, unless you specify how long you would stay away from them. In this case, three years is pretty good, but just getting somebody to stop for a few days or even a couple months is still a relatively low bar.

Comment author: shminux 23 September 2012 05:39:48AM -1 points

Can't resist...

What is an example of a case you've actually observed where MWI made a falsifiable, bold, successful prediction? ("Falsifiable" - say what would have made the prediction fail. "Bold" - explain what a CI guy or random good human physicist would have falsifiably predicted differently.)

Comment author: Kawoomba 23 September 2012 06:26:32AM 6 points

The call for such an actual prediction was based on the claim that

These predictions frequently do not overlap with what existing cognitive science would have one expect.

There is not necessarily a need for differing predictions to choose one explanation over another, which is why MWI isn't an analogous comparison. When two competing descriptions make exactly the same predictions, you choose the simpler one, based on Occam's Razor.

Connection Theory, prima facie from the OP, introduces additional assumptions (additional complexity). As such, even if it reliably made exactly the same predictions as previous models, it would be rejected on that additional complexity alone. Thus, CT needs to provide different and better-fitting predictions than the existing standard models, since on complexity grounds it cannot compete, unlike (purportedly) MWI.
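The Occam argument here can be stated in Bayesian terms (a sketch for clarity; the labels "CT" and "std" for the two hypotheses are mine, not part of the thread). The posterior odds between two hypotheses factor into prior odds times a likelihood ratio:

```latex
\frac{P(H_{\mathrm{CT}} \mid D)}{P(H_{\mathrm{std}} \mid D)}
  = \frac{P(H_{\mathrm{CT}})}{P(H_{\mathrm{std}})}
    \cdot
    \frac{P(D \mid H_{\mathrm{CT}})}{P(D \mid H_{\mathrm{std}})}
```

If both hypotheses predict the data equally well, the likelihood ratio is 1 and the posterior odds equal the prior odds; with complexity-penalizing priors, the simpler hypothesis wins. So a theory that carries extra assumptions can only pull ahead by making different predictions that the data confirm.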

Comment author: pjeby 22 September 2012 11:10:30PM 13 points

The way I could have evidence in favor of CT would be if I had created CT charts using the CT procedure, used CT to derive predictions from the charts, and then tested the predictions. And I've done this.

See, this is an example of what I mean about the CT website equaling "not understanding 'evidence'".

What you've described is primarily evidence for "more detailed models of a specific human make more accurate and surprising predictions than using a generic model of humanity."

It is almost no evidence for CT's actual theory.

By comparison, consider The Secret and such - "law of attraction". If you follow some of their procedures, you actually stand a good chance of obtaining some of their results... but this does not actually lend any evidential weight to the idea that a "law of attraction" actually exists. Richard Wiseman's "luck" research (showing a link between self-perceived "luck" and ability to notice lucky opportunities) provides a much better theory to explain such results.

In the case of CT, other practical and theoretical models involving mapping of a person's beliefs exist. One model (which isn't even a psychological theory, mind you) is a Theory of Constraints (ToC) "Current Reality Tree" (CRT) diagram based solely on elementary cause-and-effect logic. You can use a CRT to map and predict the behavior of extremely complex businesses (that is pretty much what it's for) and make all sorts of useful predictions from one, and I've used them in the past with beliefs as well.

But really, a CRT is just a visualization of elementary logic, and a CT chart is shorthand for a CRT, so in effect all your "evidence" is proving is that CT's practical approach is maybe as good as elementary logic... without providing any evidence left over for the theory itself. ;-)

That being said, building good theories involving the mind is hard; building workable techniques is much easier by comparison. (Though still no picnic!) I have long ago given up on trying to do the former, and stick with the latter, using theories now only as mnemonics and intuition pumps to drive techniques. You might be better off doing the same.

Comment author: ciphergoth 22 September 2012 06:49:17PM 4 points

Maybe lead with the evidence next time someone asks you about CT?

Comment author: Unnamed 22 September 2012 10:41:50PM 3 points

The keywords in psychology for this distinction are nomothetic vs. idiographic (which are useful as search terms, or for talking with a small subset of people). Nomothetic approaches deal with general trends among a large number of people, and cover most psychology research (e.g. people who are high in Conscientiousness tend to have more successful careers). Idiographic approaches try to engage with a particular individual's psychology in detail. From what I've read, I'd call CT an idiographic approach to motivated reasoning and defensiveness, with promising potential applications.