Connection Theory Has Less Than No Evidence

17 WilliamJames 01 August 2014 10:17AM

I’m a member of the Bay Area Effective Altruist movement. I wanted to make my first post here to share some concerns I have about Leverage Research.

At parties, I often hear Leverage folks claiming they've pretty much solved psychology. They assign credit to their central research project: Connection Theory.

Amazingly, I have never found Connection Theory endorsed by even a single conventionally educated person with knowledge of psychology. Yet some of my most intelligent friends end up deciding that Connection Theory seems promising enough to be given the benefit of the doubt. They usually give black-box reasons for supporting it, like: “I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!” They hedge this way as though psychology were a field that couldn’t be probed by science or understood in any level of detail. I would argue that this approach is too forgiving and charitable when you can instead just analyze the theory using standard scientific reasoning. You could also assess its credibility against standard quality markers, or against the perceived quality of the work that went into developing the theory.
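The black-box EV argument quoted above can be made concrete. Here is a minimal sketch in Python; only the 1% figure comes from the quote, and the payoff numbers are entirely hypothetical. The point is that the argument is fragile: it depends on ignoring the costs of acting on a wrong theory, and it flips sign as soon as those costs are counted.

```python
# A two-outcome expected-value model of the argument quoted above.
# The probability 0.01 comes from the quote; all payoff numbers are
# hypothetical, chosen only for illustration.

def expected_value(p_correct, value_if_correct, value_if_wrong):
    """EV of endorsing a theory under a simple two-outcome model."""
    return p_correct * value_if_correct + (1 - p_correct) * value_if_wrong

# The argument as stated: 1% chance of a huge payoff, costs ignored.
print(expected_value(0.01, 1_000_000, 0))        # 10000.0

# The same argument with a modest cost of being wrong (wasted effort,
# donated money, opportunity cost) and a lower probability estimate:
print(expected_value(0.001, 1_000_000, -5_000))  # negative
```

The model makes the post's complaint visible: the EV conclusion is driven almost entirely by the unexamined probability estimate, which is exactly what scrutinizing the theory's evidence should update.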

To start, here are some warning signs for Connection Theory:

  1. Invented by amateurs without knowledge of psychology
  2. Never published for scrutiny in any peer-reviewed venue, conference, open access journal, or even a non peer-reviewed venue of any type
  3. Unknown outside of the research community that created it
  4. Vaguely specified
  5. Cites no references
  6. Created in a vacuum from first principles
  7. Contains disproven Cartesian assumptions about mental processes
  8. Unaware of the frontier of current psychology research
  9. Consists entirely of poorly conducted, unpublished case studies
  10. Unusually lax methodology... even for psychology experiments
  11. Data from early studies shows a "100% success rate" -- the way only a grade-schooler would forge their results
  12. In a 2013 talk at Leverage Research, the creator of Connection Theory refused to acknowledge the possibility that his techniques could ever fail to produce correct answers.
  13. In that same talk, when someone pointed out a hypothetical way that an incorrect answer could be produced by Connection Theory, the creator countered that if that case occurred, Connection Theory would still be right by relying on a redefinition of the word “true”.
  14. The creator of Connection Theory brags about how he intentionally targets high net worth individuals for “mind charting” sessions so he can gather information about their motivation that he later uses to solicit large amounts of money from them.

I don't know about you, but most people get off this crazy train somewhere around stop #1. And given the rest, can you really blame them? The average person who sets themselves up to consider (and possibly believe) ideas this insane doesn't have long before they end up pumping all their money into get-rich-quick schemes or drinking bleach to try to improve their health.

But maybe you think you’re different? Maybe you’re sufficiently epistemically advanced that you don't have to disregard theories with this many red flags. In that case, there's now an even more fundamental reason to reject Connection Theory: As Alyssa Vance points out, the supposed "advance predictions" attributed to Connection Theory (the predictions claimed as evidence in its favor in the only publicly available manuscript about it) are just ad hoc predictions made up by the researchers themselves on a case-by-case basis -- with little to no input from Connection Theory itself. This kind of error is why there has been a distinct field called "Philosophy of Science" for the past 50 years. And it's why people attempting to do science need to learn a little about it before proposing theories with so little content that they can't even be wrong.

I mention all this because I find that people from outside the Bay Area or those with very little contact with Leverage often think that Connection Theory is part of a bold and noble research program that’s attacking a valuable problem with reports of steady progress and even some plausible hope of success. Instead, I would counsel newcomers to the effective altruist movement to be careful how much you trust Leverage and not to put too much faith in Connection Theory.

How To Lose 100 Karma In 6 Hours -- What Just Happened

-31 waitingforgodel 10 December 2010 08:27AM
As with all good posts, we begin with a hypothetical:
Imagine that, in the country you are in, a law is passed saying that if you drive your car without your seat belt on, you will be fined $100.
Here's the question: Is this blackmail? Is this terrorism?
Certainly it's a zero-sum interaction (at least in the short term). You either have to endure the inconvenience of putting on a seat belt, or risk the chance of a $100 fine.
You may also want to consider that cooperating with the seat belt fine may also cause lawmakers to believe that you'll also follow future laws.

If that one seems too obvious, here's another: A law is passed establishing a $500 fine for pirating an album on the internet.
Does this count as blackmail? Does this count as terrorism?

What if, instead of passing a law, the music companies declare that they will sue you for $500 every time you pirate an album?
Is it blackmail yet? Terrorism? Will complying teach the music companies that throwing their weight around works?

Enough with the hypothetical, this one's real: The moderator of one of your favorite online forums declares that if you post things he feels are dangerous to read, he will censor them. He may or may not tell you when he does this. If you post such things repeatedly, you will be banned.
Does this count as blackmail? Does this count as terrorism? Should we not comply with him to prevent similar future abuses of power?

Two months ago, I found a third option to the comply/revolt dilemma: turn the force back on the forceful.
Imagine this: you're the moderator of an online forum and care primarily about one thing: reducing existential risks. One day, one of your forum members vows to ensure that censoring posts will cause a small increase in existential risks.
Does this count as blackmail? Does this count as terrorism? Would you not comply to prevent similar future abuses of power?


(Please pause here if you're feeling emotional -- what follows is important, and deserves a cool head)


It is my opinion that none of these are blackmail.
Blackmail is fundamentally a single-shot game.
Laws and rules are about structuring the world's payoffs to incentivize behavior.
Now it's fair to say that there are just laws and there are unjust laws... and perhaps we should refuse to follow unjust laws... but to call a law blackmail or terrorism seems incorrect.

Here's what happened:
  • 7 weeks ago, I precommitted that censoring a post or comment on LessWrong would cause a 0.0001% increase in existential risk.
  • Earlier today, Yudkowsky censored a post on LessWrong.
  • 20 minutes later, existential risks increased 0.0001% (to the best of my estimation).

This will continue for the foreseeable future. I'm not happy about it either. Basically I think the sanest way to think about the situation is to assume that Yudkowsky's "delete" link also causes a 0.0001% increase in existential risk, and hope that he uses it appropriately.
He doesn't feel this way. He feels that the only correct answer here is to ignore the 0.0001% increase. We are at an impasse.
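For concreteness, here is a minimal sketch of how such per-event increments would accumulate. Only the 0.0001%-per-event figure comes from the post; the event count and the two accumulation models are hypothetical, shown purely for illustration.

```python
# How a 0.0001% per-event risk increase accumulates over repeated
# censorship events, under two toy models. The per-event figure is
# from the post; the event count is hypothetical.

PER_EVENT = 0.0001 / 100  # 0.0001% expressed as a probability increment

def additive_risk(n_events):
    """Total added risk if each event adds a flat increment."""
    return n_events * PER_EVENT

def compounded_risk(n_events):
    """Total added risk if each event independently scales the remaining safety margin."""
    return 1 - (1 - PER_EVENT) ** n_events

# After 100 events, the additive model gives 0.01% of added risk;
# the compounded model gives very slightly less.
print(additive_risk(100))
print(compounded_risk(100))
```

At increments this small the two models are nearly indistinguishable, which is why the post can speak of a flat 0.0001% per deletion without specifying how the increments combine.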

FAQ:
Q: Will you reconsider?
A: Sadly no. This situation is symmetric -- just as I am not immune to Yudkowsky's laws (censorship on LW if I talk about "dangerous" ideas), he is not immune to my laws.

Q: How can you be sure that a post was censored rather than deleted by the owner?
A: This is sometimes hard, and sometimes easy. In general I will err on the side of caution.

Q: How can you be sure that you haven't missed a deleted comment?
A: I use, and am improving, an automated solution.

Q: What is the nature of the existential risk increase?
A: Emails. (Yes, emails). Maybe some phone calls.
There is a simple law that I believe makes intuitive sense to the conservative right. A law that will be easy for them to endorse. This law would be disastrous for the relative chance of our first AI being a FAI vs a UFAI. Every time EY decides to take a 0.0001% step, an email or phone call will be made to raise awareness about this law.

Q: Is there any way for me to gain access to the censored content?
A: I am working on a website that will update in real time as posts are deleted from LessWrong. Stay tuned!

Q: Will you still post here under waitingforgodel?
A: Yes, but less. Replying to 100+ comments is very time consuming, and I have several projects in dire need of attention.

Thank you very much for your time and understanding,
-wfg

Edit: This post is describing what happened, not why. For a discussion about why I feel that the precommitment will result in an existential risk savings, please see the "precommitment" thread, where it is talked about extensively.

Morality is as real as the physical world.

-10 draq 27 October 2010 08:55PM

The following is distilled from the comment section of an earlier post.

Definitions

absolute and universal: Something that applies to everything and every mind.

morality (moral world): A logically consistent system of normative theories.

reality (natural world): A logically consistent system of scientific (natural) theories.

normative theory: (Almost) any English sentence in the imperative mood, or containing "should", "must", or "to be allowed to" (or an equivalent construction) as the verb, in contrast to descriptive theories.

mind: A mind is an intelligence that has values, desires and dislikes.

moral perception: Analogous to the sensory perceptions, a moral perception is the feeling of right and wrong.

Assumptions

A normative sentence arises as a result of the mind processing its values, desires and dislikes.

Ideas exist independently of the mind. Numbers don't cease to exist just because HAL dies.

Statement

In our everyday life, we don't question reality, thanks to our sensory perception. We have moral perception just as we have sensory perception; why, then, should we question morality?

If you believe that the natural world is absolute and universal, then there is -- I currently think -- no good reason to doubt the existence of an absolute and universal moral world.

A text diagram for illustration:

-----------------------------
|    sensory perception     |    -----------------------    ------------
|             +             | -- | scientific theories | -- | reality  |
| intersubjective consensus |    -----------------------    ------------
-----------------------------

Analogously,

-----------------------------
|     moral perception      |    -----------------------    ------------
|             +             | -- |   moral theories    | -- | morality |
| intersubjective consensus |    -----------------------    ------------
-----------------------------

Absolute morality

The absolute moral world I am talking about encompasses everything, including AI and alien intelligence. That does not mean alien intelligences will behave similarly to us. Different moral problems require different solutions, just as different objects behave differently according to the same physical theories. Objects in a vacuum behave differently than objects in the atmosphere; water behaves differently than ice; but they are all governed by the same physics, or so I assume.

An Edo-era samurai and a Wall Street banker may both behave perfectly morally even if they respond differently to the same problem, owing to their social environments. Maybe it is perfectly moral for AIs to kill and annihilate all humans, just as it is perfectly possible that 218 of Russell's teapots are revolving around Gliese 581 g.

The intersubjective consensus

There are different sets of theories regarding the natural world: the biblical view, the theories underlying TCM, homeopathy, and chiropractic, and the scientific view. Many of them contradict each other. The scientific view is well established because there is an intersubjective consensus on the usefulness of its methodology.

The methods used in moral discussions are by far not so rigidly defined as in science; we call it civil discourse. The arguments must be logically consistent, and the outcomes and conclusions of a normative theory must face the empirical challenge: if you can derive from your normative theories that it is permissible to kill innocent children without any benefit, then there is probably something wrong.

Using this method, we have achieved quite a lot so far. We have established the UN Human Rights Charter, and we have an elaborate system of international law, law itself being a manifestation of morality (denying that law is based on morality is like saying that technology isn't based on science).

Not everyone agrees, and some will say, "I think that chattel slavery is perfectly moral." And there are people who think that praying to an almighty pasta monster and dressing up as pirates will cure all the ills of the world. Does that mean there is no absolute reality? Maybe.

Conclusion

As long as we have values, desires, and dislikes and make judgements (which all of us do, and which is perhaps a defining characteristic of the human being beyond the biological basics), if we want to put these values into a logically consistent system, and if we believe that other minds with moral perception exist, then we have an absolute moral world.

So if we stop having any desires and stop making any judgements, that is, if we lack all moral perception, then we may still believe in morality, just as an agnostic won't deny the existence of God, but it would be totally irrelevant to us.

In the same way, if someone lacks all sensory perception, then the natural world becomes totally irrelevant to him or her.

Transparency and Accountability

16 multifoliaterose 21 August 2010 01:01PM

[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt

Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes

...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.

My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings and I feel squeamish about hurting feelings. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting. I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting in the way that I have.

Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.  
