Thank you, calcasm, for this sequence, and apologies in advance to everyone for this being a bit of a rant that has likely been said before. I fear that the very practical suggestions are going to be lost because people's brains are being overridden by a combination of fears, chief among them the fear that Less Wrong will turn into a cult.
This big danger that Less Wrong is going to turn into a cult is a phantom. It has always been a phantom. Starting a cult whose core teaching is essentially "Think for yourself, schmuck!" together with techniques for doing so effectively may be the worst idea for a cult in world history.
If there is a cult here (not that I think there is), it is the cult of pure reason, which holds it a sin to use any technique that could possibly reinforce a false belief or a behavior we might dislike; the cult of people crying out "...
[By] not implementing Projects, people will improve their Rationality skills at a far slower pace. [4] You will thus run afoul of Bhagwat's Law of Commitment: "The degree to which people identify with your group is directly proportional to the amount of stuff you tell them to do that works."
This seems to equate "improving Rationality skills" with "identifying with the group". I find this frightening, and a step towards using "rationality" as something in whose name to grub for power, influence, and followers, and as a flag to rally a generic community around. Maybe that's the function of religious teachings for religious communities, but I hope not for LW.
I think we should be a little careful of using the word "cult" as a mental stop sign, since that does seem to be what's happening here. We need to be a bit more careful about labeling something with all the bad connotations of a cult just because it has some of the properties of a cult -- especially if it only seems to have the good properties. But... that doesn't mean that this good cult property won't lead to the bad cult property or properties that we don't want. You should just be more explicit as to what and how, because I'm wavering back and forth between this article being a really, really good idea (the benefits of this plan are obvious!) and a really, really scary and bad idea (if I do it, it'll make me become part of a groupthinky monster!).
The problem I have is that both sides in my own head seem to be influenced by their own clear cognitive biases -- we have the cult attractor on one hand and the accidental negative connotations and stopsigny nature of the word "cult" on the other. So if you could semi-explicitly show why adopting the idea this article puts forth would lead to some specific serious negative consequences, that would clear up my own indecision and confusion.
I don't really know, but I'll note that Scientologists are known to laud "sanity", and Objectivists were all about "reason".
You might think that a belief system which praised "reason" and "rationality" and "individualism" would have gained some kind of special immunity, somehow...?
Well, it didn't.
It worked about as well as putting a sign saying "Cold" on a refrigerator that wasn't plugged in.
Rationality flags don't seem to help that much.
(I'm not sure where to ask this, so I'll just put it here.)
Do you have any experience with doing this kind of thing online-only? I currently don't have any rationalist community around and I'm not even sure if I want one, but the benefits seem tremendous and I'm interested in at least trying.
Finally, a really 'low-cost' way to make a project and follow up. Right before the conclusion of a Less Wrong group, give everyone a slip of paper and ask them to write down one thing they are going to do differently next week as a result of the discussion. For two minutes (total) at the beginning of the next meeting, let people tell what they did.
This is a really good idea. I've enjoyed your series of posts and I think you have a lot of really good ideas.
Reflect communally at your next LW meeting
Share examples communally at your next LW meeting.
For two minutes (total) at the beginning of the next meeting, let people tell what they did.
At first this reminded me of one of the more obnoxious LDS commitment techniques: encouraging everyone to "bear [sic] their testimony". In short, whenever there is a gathering of Mormons, they're periodically (in the context of a monthly meeting) pressured to ALL make some public commitment to belief in the church or the value of the community. This pressure is expli...
This is an excellent plan. Excellent writing, organization, thought. This is a rally-point for implementation.
It makes me uneasy when I see competent missionaries. I don't know if I have the energy to compete against them.
The idea of brevity, giving weekly assignments, and discussing them at the next meeting makes me think of "Agile software development" practices in general. The goals of rational self-improvement and agile software development seem to align fairly neatly, too.
It has the added advantage that it scales very well: You can use these techniques to manage a group, or just yourself. The ability to "go it solo" if needed seems important to this crowd.
I'm going to set a goal of having a post on this by May 22nd, to try and motivate myself to think about it more and see if I can't come up with some applied thoughts :)
I really appreciate your advice about doing things. I like doing things almost as much as I like not doing things. Doing is important, and we as a community should do more things. But... ideas! It turns out that one of the weird things about this universe is that ideas might actually be more important than actions: that the very fate of the light-cone -- whether or not orthodox Mormons get their 1+ planets (and whether or not it is "simulated" or "real") -- depends on what may turn out to be slight differences in the things humanity chooses to do, which might boil down to surprisingly small insights from individuals.
Ideas.
Good advice; I'm actually looking to start some similar projects. As you said, feedback is very important, but for some of us it's difficult to find rationalists in our area to share these experiments with. I would like to see some sort of online group where we can share and discuss practical ideas, or get advice from time to time. A forum would probably be enough, and I can create one if there's enough interest.
Today I will present a coherent and cogent case for Eliezer being a crook and a con artist. This is not for the purpose of defaming him but to show that he is wasting your money and your time. I realize that SIAI has been evaluated by an ignoramus already; I am merely filling in the gaps.
I will present facts and the proper citations in text. Let's begin:
NOTE: all sources are direct quotes from Eliezer himself, either video or text.
Facts Eliezer (hereafter referred to as DMF) claims about himself:
IQ: 143 (no mention of the test administered; if it was the Cattell, the score can be properly converted to 126)
Highest percentile score: 9.9998 (no mention of the test on which he saw the score)
DMF learned calculus at age 13.
Source: http://www.youtube.com/watch?v=9eWvZLYcous
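(As a sanity check on the parenthetical conversion above: assuming the Cattell scale's standard deviation of 24 against a conventional deviation of 15, which are assumptions not stated in the original comment, the arithmetic runs roughly as follows.)

\[
z = \frac{143 - 100}{24} \approx 1.79, \qquad 100 + 1.79 \times 15 \approx 127
\]

That lands within a point or so of the 126 cited above; the exact figure depends on which standard deviations the converter assumes.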
Math Ability: "I was a spoiled math prodigy as a child..."
"[Marcello math work] ...That’s not right" and maybe half the time it will actually be wrong. And when I’m feeling inadequate I remind myself that having mysteriously good taste in final results is an empirically verifiable talent, at least when it comes to math."
Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
Standard workday:
When writing: 2-3 hours of writing, then a couple of hours off.
When doing FAI work: 2-3 hours of work, then a break, then 2-3 hours, with a day off before repeating. (During downtime, math may be studied; it did not sound like that happened very much.)
Blogging: 1 post per day, sometimes 2; the posts do not seem to exceed 12 pages from what I have seen.
Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Admission by DMF: DMF admits to a weakness of will. Source: http://www.youtube.com/user/michaelgrahamrichard#p/u/26/9kI1IxOrJAg
Publications Officially Listed: "In 2001, he published the first technical analysis of motivationally stable goal systems, with his book-length Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In 2002, he wrote "Levels of Organization in General Intelligence," a paper on the evolutionary psychology of human general intelligence, published in the edited volume Artificial General Intelligence (Springer, 2006). He has two papers in the edited volume Global Catastrophic Risks (Oxford, 2008), "Cognitive Biases Potentially Affecting Judgment of Global Risks" and "AI as a Positive and Negative Factor in Global Risk." Source: http://singinst.org/aboutus/team
Claims About the FAI Problem: "My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof." Source: http://johncarlosbaez.wordpress.com/2011/03/07/this-weeks-finds-week-311/
AI-related projects started: Flare. Source: http://flarelang.sourceforge.net/
Abandoned Flare: "JB, ditched Flare years ago." (2008) Source: http://lesswrong.com/lw/tf/dreams_of_ai_design/msj
"A legacy of pre-2003 Eliezer, of no particular importance one way or another." Source: http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/121t
DMF Discounted LOGI: "LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess." Source: http://lesswrong.com/lw/1hn/call_for_new_siai_visiting_fellows_on_a_rolling/1av0
Stated Job Description and Plan: "Eliezer Yudkowsky: My job title is Research Fellow, but I often end up doing things other than research. Right now I’m working on a book on human rationality (current pace is around 10,000-13,000 words/week for a very rough first draft, I’m around 150,000 words in and halfway done with the rough draft if I’m lucky). When that’s done I should probably block out a year to study math and then go back to Artificial Intelligence theory, hopefully ever after (until the AI theory is done, then solid AI development until the AI is finished, et cetera)." Source: http://hplusmagazine.com/2010/07/21/simplified-humanism-positive-futurism-how-prevent-universe-being-turned-paper-clips/
How Is He a Crook? DMF claims that he mastered calculus at 13 and is a math prodigy. What evidence is there for this claim?
Papers: The only paper with any degree of math, albeit simple math, is "An Intuitive Explanation of Bayes' Theorem". Source: http://yudkowsky.net/rational/bayes
What about his quantum physics posts? Source: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/
Never once does DMF solve the wave equation, nor does he compute a single derivative or integral. The following are most of the posts with any math in them:
http://lesswrong.com/lw/pe/joint_configurations/
http://lesswrong.com/lw/q0/entangled_photons/
http://lesswrong.com/lw/q2/spooky_action_at_a_distance_the_nocommunication/
http://lesswrong.com/lw/q4/decoherence_is_falsifiable_and_testable/
The other posts contain amusing graphs, many hand-drawn, and pseudo-math:
http://lesswrong.com/lw/pl/no_individual_particles/
http://lesswrong.com/lw/pk/feynman_paths/
http://lesswrong.com/lw/pj/the_quantum_arena/
http://lesswrong.com/lw/pi/classical_configuration_spaces/
http://lesswrong.com/lw/pp/decoherence/
http://lesswrong.com/lw/pq/the_socalled_heisenberg_uncertainty_principle/ (amusing pseudo-math)
http://lesswrong.com/lw/pu/on_being_decoherent/
http://lesswrong.com/lw/pz/decoherence_as_projection/
If DMF mastered calculus at 13, then why is there no evidence of it in any of these posts? If DMF is a math prodigy who is good at explaining math, why is there no explanation of the wave equation? He does mention it in his timeless physics post, but it appears that he took his description from Wikipedia, since there are some striking similarities. It is one thing to talk in math jargon such as derivatives and gradients; it is another thing entirely to actually use those ideas to solve an equation or model a system. DMF has shown no evidence that he can do such things.
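(For reference, the "wave equation" being invoked here is presumably, given the quantum-sequence context, the time-dependent Schrödinger equation; a standard textbook statement of it, not drawn from any of the posts listed above, is:)

\[
i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) = \hat{H}\,\psi(x,t)
\]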
This critique is so poor that I think there's a nonzero chance that you're a plant.
Related to: Lessons from Latter-day Saints, Building Rationalist Communities overview, Holy Books (Or Rationalist Sequences) Don't Implement Themselves
My thesis:
Intelligent discussion of ideas is always refreshing. But translating that into action is more difficult.
Our learned reflexes are deep. They need to be overridden. How? Practice.
One woman I taught in India, we’ll call her Girija, was 35 years old, extremely intelligent and really wanted to change her life but had incredibly low levels of self-confidence. Every time we met Girija, we’d have a really sharp discussion, followed by her pouring her heart out to us. It was the same every time, and though we enjoyed the visits, and the food she would feed us, she never seemed to be getting anywhere.
If she really wanted to fundamentally change her life, our weekly meetings weren’t enough. (Similarly, weekly meetups are a good start, but if you really want to be learning rationality you should be practicing every day.)
We felt that if Girija spent some time every day with her 9 year old daughter and live-in boyfriend, reading the scriptures together, they would be happier. We explained this to her frequently, and she said she would start -- but she never did it.
One week, through cleverly calling Girija and chatting for 10 minutes every day, we got her to do it. After the week was over, we asked her how it went.
“You know, it was really good,” she said. “Sandeep and I have been getting along a lot better this week because we did that.”
It was like a light had turned on in her head. Because we followed up, she did it, and was far more motivated to do more things afterwards.[1]
Let me give two simple examples of goal, project, and follow-up.[2]
I came up with these in about five minutes. Having spent more time in the community than me, you will all be able to generate more and better possibilities.
Some points about Projects:
Finally, a really 'low-cost' way to make a project and follow up. Right before the conclusion of a Less Wrong group, give everyone a slip of paper and ask them to write down one thing they are going to do differently next week as a result of the discussion. For two minutes (total) at the beginning of the next meeting, let people tell what they did.
Some notes and warnings:
Doing this in a fraternalistic manner, not a paternalistic manner, will be a key to success.[3] Community agreement that We Should Do This is important before launching a Project.
Beware of the following tradeoff: by not implementing Projects, people will improve their Rationality skills at a far slower pace.[4] You will thus run afoul of Bhagwat's Law of Commitment: "The degree to which people identify with your group is directly proportional to the amount of stuff you tell them to do that works."
I will discuss this more later, along with possible solutions. Latter-day Saints, with a large emphasis on doing things, have high levels of commitment; however, there are definitely people who would come to church more if they were expected to do less.
Please post any ideas you have for Projects in the comments.
[1] Even subtracting the religious element, common goals reduce conflict.
[2] Here are some keys to following up that I learned; in two years, I probably applied them with about 600 people:
[3] Coming from my experience as a Latter-day Saint missionary, my personal examples are all fairly paternalistic. With tweaks, they can all be made fraternalistic. The sentiment has been expressed that “I don’t like people telling me what to do”; this will avoid that pitfall.
[4] I say 'far slower' based on my missionary experience. When people were dedicated to specific projects, they seemed to improve a lot faster.