I’m a member of the Bay Area Effective Altruist movement. I wanted to make my first post here to share some concerns I have about Leverage Research.

At parties, I often hear Leverage folks claiming they've pretty much solved psychology. They assign credit to their central research project: Connection Theory.

Amazingly, I have yet to find Connection Theory endorsed by even a single conventionally educated person with knowledge of psychology. Yet some of my most intelligent friends end up deciding that Connection Theory seems promising enough to be given the benefit of the doubt. They usually give black-box reasons for supporting it, like, “I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”. They do this sort of hedging as though psychology were a field that couldn’t be probed by science or understood in any level of detail. I would argue that this approach is too forgiving and charitable when you can instead just analyze the theory using standard scientific reasoning. You could also assess its credibility using standard quality markers, or even the perceived quality of the work that went into developing the theory.

To start, here are some warning signs for Connection Theory:

  1. Invented by amateurs without knowledge of psychology
  2. Never published for scrutiny in any peer-reviewed venue, conference, or open-access journal, or even in a non-peer-reviewed venue of any type
  3. Unknown outside of the research community that created it
  4. Vaguely specified
  5. Cites no references
  6. Created in a vacuum from first principles
  7. Contains disproven Cartesian assumptions about mental processes
  8. Unaware of the frontier of current psychology research
  9. Consists entirely of poorly conducted, unpublished case studies
  10. Unusually lax methodology... even for psychology experiments
  11. Data from early studies shows a "100% success rate" -- the way only a grade-schooler would forge their results
  12. In a 2013 talk at Leverage Research, the creator of Connection Theory refused to acknowledge the possibility that his techniques could ever fail to produce correct answers.
  13. In that same talk, when someone pointed out a hypothetical way that an incorrect answer could be produced by Connection Theory, the creator countered that if that case occurred, Connection Theory would still be right by relying on a redefinition of the word “true”.
  14. The creator of Connection Theory brags about how he intentionally targets high net worth individuals for “mind charting” sessions so he can gather information about their motivation that he later uses to solicit large amounts of money from them.

I don't know about you, but most people get off this crazy train somewhere around stop #1. And given the rest, can you really blame them? The average person who sets themselves up to consider (and possibly believe) ideas this insane doesn't have long before they end up pumping all their money into get-rich-quick schemes or drinking bleach to try to improve their health.

But maybe you think you’re different? Maybe you’re sufficiently epistemically advanced that you don’t have to disregard theories with this many red flags. In that case, there's now an even more fundamental reason to reject Connection Theory: as Alyssa Vance points out, the supposed “advance predictions” attributed to Connection Theory (the predictions claimed as evidence in its favor in the only publicly available manuscript about it) are just ad hoc predictions made up by the researchers themselves on a case-by-case basis -- with little to no input from Connection Theory itself. This kind of error is why there has been a distinct field called “Philosophy of Science” for the past 50 years. And it's why people attempting to do science need to learn a little about it before proposing theories with so little content that they can't even be wrong.

I mention all this because I find that people from outside the Bay Area or those with very little contact with Leverage often think that Connection Theory is part of a bold and noble research program that’s attacking a valuable problem with reports of steady progress and even some plausible hope of success. Instead, I would counsel newcomers to the effective altruist movement to be careful how much they trust Leverage and not to put too much faith in Connection Theory.


This post would have been a lot better had it contained at least a thumbnail sketch of what Connection Theory is, or what its principal claims are. I reached the end of the post still rather mystified as to what you are talking about.

For those, like me, who needed more background before reading the post, I found quite a good critique of Connection Theory here.

[-]gwern10y310

I didn't realize Connection Theory was even still around (the last time I remember reading anything about Leverage Research doing anything was when they failed at polyphasic sleep, which was like a year or two ago now?). Good to have some criticism of it, I suppose.

If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which was July 19-23. There were a number of Leverage folk who had just started the experiment there.

(the last time I remember reading anything about Leverage Research doing anything was when they failed at polyphasic sleep, which was like a year or two ago now?)

Is there a written postmortem of that episode available somewhere? I can only find the announcement that they wanted to start polyphasic sleep but not anything about the results.

[-]gwern10y190

Is there a written postmortem of that episode available somewhere?

No. People have been posting comments requesting a postmortem or at least the Zeo data for a long time now, without any public response. Hence, the inference is that (aside from Fallshaw) it was a miserable failure.

I mention all this because I find that people from outside the Bay Area or those with very little contact with Leverage often think that Connection Theory

Um, this person from outside the Bay Area with very little (i.e. no) contact with Leverage does not think about Connection Theory at all. It may be something that looms large in the circle of people you know, even those of them in the outer darkness beyond the Bay Area, but that is bound to be a small subset of LessWrong.

I see that Leverage and CT have been discussed here before, here and here, although none of the links there to their site work any more. The current Leverage site looks like something designed from the top down that hasn't bottomed out in real things yet, a shell still under development. There is no information about Connection Theory to be found there at the moment.

Perhaps someone from Leverage could write something here to say what the criticisms of the OP are criticisms of?

[-]Jiro10y170

It sounds to me like some of the criticisms of this can be extended to much of LW's unusual ideas, such as the nature of the unfriendly AI danger and its solution: invented by amateurs, not peer reviewed, unknown outside of this community, etc.

[-]Fhyve10y150

I'd say Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this (https://twitter.com/elonmusk/status/495759307346952192), and some high-profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers) give FAI much more legitimacy than Connection Theory.

Note that it took MIRI quite a long time to get where they are now, about 7 years? In those years FAI was very hard to communicate, but the situation is better now.

I suspect a similar thing may be going on with connection theory, as most of the critics of it don't seem to know very much about it, but are quick to criticize.

[-]V_V10y150

Also cryonics, paleo/low-carbs/ketogenic diets, various self-help/life-hacking/PUA stuff, etc.

[-]Emile10y150

I mention all this because I find that people from outside the Bay Area or those with very little contact with Leverage often think that Connection Theory is part of a bold and noble research program that’s attacking a valuable problem with reports of steady progress and even some plausible hope of success.

Nope, I'm outside of the Bay Area and think it's probably a big load of bullshit (or, more charitably, a few good ideas, oversold).

This post is a political attack that doesn't add anything new or substantive to a discussion of Leverage or of Connection Theory. Much of the language used in the post is used primarily for negative connotational value, which is characteristic of political speech. Politics is hard mode. A discussion about Leverage or Connection Theory may be valuable, but this post is not a good way to start one.

I had a similar impression; I didn't see anything here that wasn't in http://lesswrong.com/lw/el9/a_critique_of_leverage_researchs_connection_theory/. I agree with that post that CT is obviously wrong. However, writing a new post to criticize it that doesn't contain anything not in the previous post sets off my bandwagon-skeptic detectors.

[-]V_V10y40

Would you consider a post criticizing homoeopathy with similar language a “political attack”?

I would. I would think that such a post was quite silly, in the context of being posted to LessWrong. I would hope that, if there were any question about the subject, someone would simply post a list of evidence or neutral well-reasoned arguments. Homeopathy is easily enough dispatched with "there is no explanation for this that conforms to our understanding of physical laws, and there is no evidence for it, therefore it is very unlikely to be true." Bringing political speech and in-group/out-group dynamics into it is detrimental to the conversation.

I have to say that I am extremely disappointed in the response to this post. I have no stake on either side of the issue, but if this is the best we can do then I can't tell that the Sequences have had any effect at all.

[-][anonymous]10y10

I found it interesting to compare the style of the OP to the earlier "critique" post by peter_hurford in September 2012 (already mentioned in Salemicus's comment). Looking at what WilliamJames posted, it does come across as deeply impassioned but lacking in evidence - evidence that may well be available but which isn't presented.

[This comment is no longer endorsed by its author]
[-]V_V10y-30

Apparently the OP was worried that this “Connection Theory” has become surprisingly popular in his/her social circle of “rationalists”/effective altruists.

if this is the best we can do then I can't tell that the Sequences have had any effect at all.

Well, I haven't read most of them, but some of the parts I've read contained an endorsement of cryonics, which I think is part of a common pattern around here....

Seems to me the pattern is: “I know a guy or two who read LessWrong/HPMOR, and they really strongly believe X. Therefore, X is the core belief of the rationalist community”.

Where X is Connection Theory today, the belief that aspiring rationalists consider themselves always correct and free of biases a few days ago, and, if my memory serves me well, a year or two ago there was a link in an open thread to some guy abusing his girlfriend and calling his behavior rationalist because he had read a few early chapters of HPMOR (he didn't debate on LW or go to meetups, and none of us really knew him; but he presented himself as a member of our community, and his claims seemed credible to outsiders).

This is a PR issue, and it would probably be nice to have a FAQ explaining what are and what aren't our beliefs. There would be a place for cryonics (with the explanation that on average people predict roughly a 15% chance it would work), but e.g. CT wouldn't be there.

This is a PR issue, and it would probably be nice to have a FAQ explaining what are and what aren't our beliefs.

Or in other words, some sort of LWean Creed? :)

But yes, you're right: having some clearly defined core beliefs is indeed useful for delineating in-group/out-group and avoiding the non-central fallacy, where things like that guy and his STDs and walking on broken glass are clearly not normative, common, or related to the LW memeplex, and represent only his glib rationalizations. Yvain wrote a little bit about that recently: http://slatestarcodex.com/2014/07/14/ecclesiology-for-atheists/

Personally, I prefer using the LW survey for this. It doesn't address many of the higher-level issues but at least for basic object-level stuff like cryonics, it's useful for laying out clearly what LWers tend to believe at what level of confidence.

(And to point out something yet again, this is why not being allowed to add the basilisk question to the LW survey has been so harmful: it lets a very niche marginal belief be presented by outsiders like Stross as a central belief - it's suppressed by the Grand Leader as a secret, it must be important! - and we are unable to produce concrete data pointing out 'only 1% of LWers take the basilisk at all seriously' etc.)

The crazier things in Scientology are also believed in by only a small fraction of the followers, yet they're a big deal inasmuch as this small fraction is the people who run that religion.

edit: Nobody's making a claim that visitors to a Scientology website believe in Xenu, and it would be downright misleading to poll those visitors and argue that Scientologists don't believe in Xenu. Fortunately for us, Scientology is unlikely to allow such a poll, because they don't want to undermine themselves.

The crazier things in Scientology are also believed in by only a small fraction of the followers, yet they're a big deal inasmuch as this small fraction is the people who run that religion.

The crazier things are only exposed to a small fraction, which undermines the point. Do the Scientologists not believe in Xenu because they've seen all the Scientology teachings and reject them, or because they've been diligent and have never heard of them? If they've never heard of Xenu, their lack of belief says little about whether the laity differs from the priesthood... In contrast, everyone's heard of and rejected the Basilisk, and it's not clear that there's any fraction of 'people who run LW' which believes in the Basilisk comparable to the fraction of 'people who run that religion' who believe in Xenu. (At this point, isn't it literally exactly one person, Eliezer?)

edit: Nobody's making a claim that visitors to a Scientology website believe in Xenu

Visiting a Scientology website is not like taking the LW annual poll as I've suggested, and if there were somehow a lot of random visitors to LW taking the poll, they can be easily cut out by using the questions about time on site / duration of community involvement / karma. So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

I'm pretty sure that a poll taken of most Catholics would show that they think abortion and birth control are moral. I'm also pretty sure that a poll taken of most Catholics would show that (to the extent they've heard of the issue at all) they think the medieval Church was completely wrong in how it treated Galileo, not just factually wrong about heliocentrism.

The church itself would disagree. Is it illegitimate to criticize either the church or Catholicism on that basis?

Well, mostly everyone has heard of Xenu, for some value of "heard of", so I'm not sure what your point is.

So the poll would still be very useful for demonstrating that the Basilisk is a highly non-central and peripheral topic.

Yeah. So far, though, it is so highly non-central and so peripheral that you can't even add a poll question about it.

edit:

(At this point, isn't it literally exactly one person, Eliezer?)

Roko, and someone who claimed to have had nightmares about it... who knows if they still believe, and whoever else believes? Scientology is far older (and far bigger), and there have been a lot of insider leaks, which is where we know the juicy stuff from.

As for how many people believe in the “Basilisk”, given various “hint hint there's a much more valid version out there but I won't tell it to you” type statements and repeated objections along the lines of “that's not a fair description of the Basilisk, it makes a lot more sense than you make it out to be”, it's a bit slippery with regard to what we mean by Basilisk.

That guy and his STDs and walking on broken glass

Is this a reference to something that actually happened? I think I'd very much like to hear that story.

It was claimed to have happened. I think the guy & his antics were described somewhere on LW in the past few months; I may've replied to it.

How about building a list of 100 statements, making a poll, and letting everybody do Likert ratings? It might also tell us how different beliefs cluster together.

I'm a little surprised that this rises to the level of needing to be addressed. There are a lot of bad theories on the internet. And a lot of kooky new age ideas in California. Without necessarily saying that this is one of them (though based on the red flags it seems like it might be), why does this theory need to be criticized more than any other? That's not a critique of the post, by the way, just a request for more information.

[-]Emile10y150

Well, it's a bit worrying if the “main cluster” of the LessWrong/rationalist/MIRI nebula, i.e. the Bay Area rationalists, is propagating crackpotty ideas, or is even just as susceptible to them as the general Bay Area population. I don't know if that's actually the case, though. Maybe it's more of a problem for the Effective Altruism movement (i.e. it attracts both rationalists and crackpots that share their ideas, but there's no overlap between them).

There have been numerous critiques of Connection Theory already, and I encounter people disavowing it with much more frequency than people endorsing it, in both the rationalist and EA communities. So, I don't think we have anything to worry about in that direction. I'm more worried by the zeal with which people criticize it, given that Leverage rarely seems to mention it, all of the online material about it is quite dated, and many of the critics don't seem to actually know much of anything about it.

To be extra clear: I'm not a proponent of CT; I'm very skeptical of it. It's just distressing to me how quick the LW community is to politicize the issue.

One part that worries me is that they put on the EA Summit (and ran it quite well), and thus had a largish presence there. Anders' talk was kind of uncomfortable for me to watch.

With regard to expected value, there are a very large number of potentially huge negative network effects from peddling a wrong theory, beyond the cost of donating to its research.

The expected value computed by a very, very well-informed being may be the sum of many terms, some positive, some negative. When we haven't calculated that sum, and are thus ignorant of its sign or any other property of the result, our number for the expected value, taking into account our uncertainty, is zero. Ignorance of other terms of the sum very rapidly ‘kills’ the expected value.

The thing to note, then, is that you don't have to resort to mere passive evaluation of expected value. You can set a requirement, and then people will have to pass that requirement. E.g. you can choose a ‘donate when X’ or ‘believe when X’ strategy.

The most charitable take on it that I can form is similar to Scott's take on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/): it might not be validated by science, but it provides a description language with a high degree of granularity for something that most people don't have a good description language for. On this interpretation, it is more of a theory in the social-sciences sense, a lens through which to look at human motivation, behaviour, etc. This probably differs from, and is a much weaker claim than, the one people at Leverage would make.

I don't know how I feel about the allegations at the end. It seems that, other than Connection Theory, Leverage is doing good work, and having more money is generally better. I would neither endorse nor criticize their use of it, but since I don't want those tactics used by arbitrary people, I'd fall on the side of criticizing. I would also recommend that the aforementioned creator not be so open about his ulterior motives and some other things he has mentioned in the past. All in all, Connection Theory is not what Leverage is selling it as.

Edit: I just commented on the theory side of it. As for the therapy side (or however they are framing the actual-actions side), a therapy doesn't need its underlying theory to be correct in order to be effective. I am rather confident that actually doing the Connection Theory exercises will be fairly beneficial, though actually doing a lot of other things coming from psychology would probably be fairly beneficial too. And, other than the hole in your wallet, talking to the aforementioned creator probably is as well.

[-][anonymous]10y10

Invented by amateurs without knowledge of psychology

...

I don't know about you, but most people get off this crazy train somewhere around stop #1

A post on LessWrong.

[-][anonymous]10y00

Excellent refutation of LessWrong.

[This comment is no longer endorsed by its author]
[-]V_V10y00

So, this "Connection Theory" looks like run-of-the-mill crackpottery. Why are people paying attention to it?

[-][anonymous]10y130

Perhaps it's because of the strong overlap with LessWrong and Leverage's description of themselves as an organisation adopting an explicitly rationalist approach: see the intro post by lukeprog from early 2012.

(in fact, LW is the only place I have ever heard of Leverage and Connection Theory)

[-]V_V10y-20

Ah, good old in-group bias...

Paying attention to what the in-group does is no bias. It makes sense to evaluate the ideas of your friends.

[-]V_V10y-40

And does it take more than five seconds to dismiss an idea like this?

Why are people paying attention to it?

Are they? I almost forgot it exists.

I don't remember much about it, but my impression was that it more or less says this:

1) People do things they believe contribute to their goals. Of course their models can be wrong.

2) Leave a line of retreat. If someone does/believes X because according to their model it is the only/best way to a goal G they care about, then before you start convincing them to abandon X, first show them there are alternative ways to reach G. Then they will not fight for X so hard.

It doesn't seem so crazy to me. Just overhyped, without the corresponding extraordinary evidence, and overestimating how strategic human behavior is.

So, this "Connection Theory" looks like run-of-the-mill crackpottery. Why are people paying attention to it?

From the post:

“I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”

Sounds like a Pascal's memetic mugging to me.

Connection Theory Has Less Than No Evidence

Surely this means there is evidence against Connection Theory. It seems to me a more accurate title would be "Connection Theory Has Very Little Evidence".

On a more general point, it seems reasonable to me that amateurs might be able to fix large branches of diseased fields if the field is confused but the underlying problem is simple, as might be the case with certain areas of philosophy. OTOH, while psychology is certainly rather suboptimal, its underlying problem seems to be very hard.

"less than no" appears to be what's being asserted: no evidence for, and the evidence against (heuristics, but ones in need of a proper answer) is the less-than.

poorly conducted, unpublished case studies

So, there is evidence for it. It's just really poor-quality evidence.

Nay. Remember that any evidence must be weighed against competing theories. If a case supports Connection Theory but supports Foobar Theory better, then that case is evidence against CT.

But one of those theories is the null hypothesis.

Suppose I have a coin, and am considering the following hypotheses:

H1: It is a fair coin
H2: The coin comes up heads 80% of the time
H3: The coin comes up heads 95% of the time

Suppose I flip the coin and get heads.

Condition A: I was originally split between H2 and H3 (prior of .50 for both).

The coin coming up heads is consistent with both hypotheses, but it's more consistent with H3, so I will raise my confidence in H3. Since probabilities must add up to 1, I have to lower my confidence in H2. (If I've calculated correctly, P(H2) is now about .46)

Condition B: I was strongly predisposed towards H1. So, say, .70 for H1, .15 for H2 and H3.

Posterior probabilities: P(H1) = .57, P(H2) = .20, P(H3) = .23.

Condition C: Priors P(H1) = .70, P(H2) = .25, P(H3) = .05

Posterior probabilities: P(H1) = .59, P(H2) = .33, P(H3) = .08.

So, yes, if the only two hypotheses are CT and FT, and the evidence supports FT more than CT, then it's evidence against CT. But if there are other hypotheses (and surely there are), then a case can be evidence for both CT and FT. And if CT has a higher prior than FT, then CT can end up with a higher posterior.
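
(For anyone who wants to check the arithmetic, here is a minimal sketch of the normalization above, in Python. The priors and likelihoods are the same made-up coin-example numbers from Conditions A-C, with Condition A treated as P(H1) = 0; nothing here is data about CT.)

```python
def posterior(priors, likelihoods):
    # Multiply each prior by its likelihood, then renormalize over
    # all hypotheses under consideration.
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [round(j / total, 2) for j in joint]

# P(heads | H1) = 0.5, P(heads | H2) = 0.8, P(heads | H3) = 0.95
likelihoods = [0.5, 0.8, 0.95]

print(posterior([0.0, 0.5, 0.5], likelihoods))    # Condition A: [0.0, 0.46, 0.54]
print(posterior([0.7, 0.15, 0.15], likelihoods))  # Condition B: [0.57, 0.2, 0.23]
print(posterior([0.7, 0.25, 0.05], likelihoods))  # Condition C: [0.59, 0.33, 0.08]
```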

It appears that you may have a skewed idea of how Bayesian reasoning works. When doing a Bayesian calculation to find out the posterior for a hypothesis, one does not compare the hypothesis against a particular competing hypothesis; one compares the hypothesis against the entirety of alternative hypotheses. It's not the case that the hypothesis that best fits the data gets its probability increased and every other hypothesis has its probability decreased. In fact, optimizing simply for the hypothesis that fits the data best, without any concern for other criteria (such as hypothesis complexity), is generally recognized as a serious problem, and is known as "overfitting".

When doing a Bayesian calculation to find out the posterior for a hypothesis, one does not compare the hypothesis against a particular competing hypothesis; one compares the hypothesis against the entirety of alternative hypotheses.

Yes, but in the case of "the totality of the theories of the mind", this is impossible. You cannot possibly calculate P("poorly conducted, unpublished case studies" | not CT) to get a likelihood ratio. Plus, if you admit more than two competing hypotheses, you lose the additivity of log-odds and make the calculations even more complicated.
If you want any hope of getting a real Bayesian analysis of CT, the only possible way is to compare it against the best possible alternative.
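
For concreteness, here is a minimal sketch of that two-hypothesis comparison, in Python. The prior odds and likelihood ratio are invented purely for illustration, and "ALT" stands for whatever the best available alternative theory turns out to be:

```python
import math

# Invented numbers, purely to illustrate the odds-form update.
prior_odds = 0.1        # prior odds P(CT) : P(ALT), e.g. 1 : 10 against CT
likelihood_ratio = 0.5  # P(evidence | CT) / P(evidence | ALT)

# With exactly two hypotheses, Bayes' rule in odds form is a single product...
posterior_odds = prior_odds * likelihood_ratio

# ...and in log space the same update is additive, which is the additivity
# of log-odds mentioned above.
log_posterior_odds = math.log(prior_odds) + math.log(likelihood_ratio)

print(posterior_odds)                # 0.05, i.e. 1 : 20 against CT
print(math.exp(log_posterior_odds))  # same value, recovered from the log-odds
```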

It appears that you may have a skewed idea of how Bayesian reasoning works.

I think either you underestimated my understanding of how Bayesian reasoning works, or I underestimated the inferential distance of doing Bayesian calculations in a real case.

So, yes, if the only two hypotheses are CT and FT, and the evidence supports FT more than CT, then it's evidence against CT.

Which is exactly what I said.

[+]XiXiDu10y-180