Eliezer_Yudkowsky comments on A Critique of Leverage Research's Connection Theory - Less Wrong

Post author: peter_hurford 20 September 2012 04:28AM




Comment author: Eliezer_Yudkowsky 23 September 2012 03:55:01AM 12 points [-]

These predictions frequently do not overlap with what existing cognitive science would have one expect.

What is an example of a case you've actually observed where CT made a falsifiable, bold, successful prediction? ("Falsifiable" - say what would have made the prediction fail. "Bold" - explain what a cogsci guy or random good human psychologist would have falsifiably predicted differently.)

Comment author: Geoff_Anders 23 September 2012 12:48:37PM *  6 points [-]

For at least 2 years prior to January 2009, I procrastinated for 1–3 hours a day reading random internet news sites. After I created my first CT chart, I made the following prediction: "If I design a way to gain information about the world that does not involve reading internet news sites and that does not alter my way of achieving my other intrinsic goods, then I will stop spending time reading these internet news sites." The "does not alter my way of achieving my other intrinsic goods" was unpacked. It included: "does not alter my way of gaining social acceptance", "does not alter my relationships with my family members", etc. The specifics were unpacked there as well.

This prediction was falsifiable - it would have failed if I had kept reading internet news sites. It was also bold - cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior. And it was also successful - after implementing the recommendation in January 2009, I stopped procrastinating as predicted. Now, of course there are multiple explanations for the success of the prediction, including "CT is true" and "you just used your willpower". Nevertheless, this is an example of a falsifiable, bold, successful prediction.

Comment author: pjeby 24 September 2012 03:46:16AM 7 points [-]

cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior.

Your model of human psychologists needs updating, then. Books on hypnotism that I read when I was 11 discuss needs substitution, secondary gain, etc. that would be relevant to making such a prediction. Any good human psychologist knows to look for what gains a behavior produces.

Of course, maybe you meant "good (random human) psychologists", not "good, random (human psychologists)" - i.e., psychologists who study the behavior of random humans, rather than people who help individual humans... in which case, that's a really low bar for CT to leap over.

Also:

it would have failed if I had kept reading internet news sites.

This is also a really low bar, unless you specify how long you would stay away from them. In this case, three years is pretty good, but just getting somebody to stop for a few days or even a couple months is still a relatively low bar.

Comment author: shminux 23 September 2012 05:39:48AM -1 points [-]

Can't resist...

What is an example of a case you've actually observed where MWI made a falsifiable, bold, successful prediction? ("Falsifiable" - say what would have made the prediction fail. "Bold" - explain what a CI guy or random good human physicist would have falsifiably predicted differently.)

Comment author: Kawoomba 23 September 2012 06:26:32AM 6 points [-]

The call for such an actual prediction was based on the claim that

These predictions frequently do not overlap with what existing cognitive science would have one expect.

Differing predictions are not necessarily needed to choose one explanation over another, which is why MWI isn't an analogous comparison. When two competing descriptions make the exact same predictions, you choose the simpler one, based on Occam's Razor.

Connection Theory, prima facie from the OP, introduces additional assumptions (additional complexity). As such, even if it reliably made the exact same predictions as previous models, it would be rejected on that additional complexity alone. Thus, CT needs to provide different and better-fitting explanations than the hitherto standard models, since on complexity grounds it cannot compete, unlike (purportedly) MWI.
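The "equal predictions, extra complexity, therefore rejected" argument above is the same logic formalized by information criteria in model selection. A minimal sketch (the specific numbers and parameter counts are purely illustrative, not drawn from CT or any real model):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower is better.

    The n_params * log(n_obs) term penalizes complexity, so a model
    that fits the data no better than a simpler rival loses outright.
    """
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Two models that fit the data equally well (identical log-likelihood,
# illustrative values) but differ in how many assumptions they make.
n_obs = 100
ll = -120.0
simple_model = bic(ll, n_params=3, n_obs=n_obs)
complex_model = bic(ll, n_params=7, n_obs=n_obs)

# Equal predictive fit + more parameters => the complex model scores worse.
assert simple_model < complex_model
```

This is only one way of cashing out Occam's Razor quantitatively; the point is that a theory carrying extra assumptions must buy better fit, not merely equal fit.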