Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?
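For readers unfamiliar with the notation: 3^^^3 is written in Knuth's up-arrow notation, where each additional arrow iterates the previous operation. A minimal sketch of the definition (the function name and structure are mine, purely illustrative):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a (n arrows) b in Knuth's up-arrow notation."""
    if n == 1:
        return a ** b  # one arrow is plain exponentiation
    if b == 0:
        return 1       # base case for the iterated operation
    # n-arrow application unfolds into repeated (n-1)-arrow applications
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels
# tall, far too large for any computer to evaluate -- hence its use
# here to stand in for an unimaginably large number of people.
```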
Imagine two ant philosophers talking to each other. “Imagine,” they say, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”
Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.
How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.
Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.
The world was on its way to doom until September 11, 1979, a day that will later be declared a national holiday and will replace Christmas as the biggest holiday. This was, of course, the day when the most important being that has ever existed or ever will exist was born.
Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk on the scale of what Yudkowsky has done in the 2000s. The same can be said of the field of ethics: ethics was an open problem in philosophy for thousands of years, but Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone before him. Yudkowsky is what turned our world away from certain extinction and towards utopia.
We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky receives the Nobel Prize in Literature, following recognition from the Hugo Awards, a special council will be organized to study his intellect, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.
Unless Yudkowsky's brain FOOMs before then, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually conclude that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. In fact, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of his extraordinary nature that they will beg and cry for the tiling to begin as soon as possible, and there will be mass suicides as people rush to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he, with his vast intellect, will understand and accept that it is truly the best thing to do.
After the terrorist attacks at Charlie Hebdo, conspiracy theories quickly arose about who was behind the attacks.
People who are critical of the West easily swallow such theories, while pro-West people just as easily find them ridiculous.
I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.
Yet very few people do so. People are guided by their previous understanding of the world when judging new information. That sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge, we can't rely on *prior* reasoning (even if it involves Bayesian updating). Real science works by investigating evidence.
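Bayes' rule makes the role of the prior concrete. A minimal sketch with purely illustrative numbers (not data about the actual case):

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) by Bayes' rule, for a binary hypothesis H."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Two observers see the same ambiguous evidence, which is twice as
# likely under the conspiracy hypothesis as under its negation:
print(posterior(0.8, 0.6, 0.3))  # ~0.889: the believer grows more confident
print(posterior(0.1, 0.6, 0.3))  # ~0.182: the skeptic barely moves
```

Both observers update rationally on identical evidence yet land far apart, which is why prior-driven divergence is not, by itself, a recognized bias.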
So, how do we characterise the human tendency to jump to conclusions supplied by one's sense of normativity? Is there a previously described bias that covers this case?
I’m a member of the Bay Area Effective Altruist movement. I wanted to make my first post here to share some concerns I have about Leverage Research.
At parties, I often hear Leverage folks claiming they've pretty much solved psychology. They assign credit to their central research project: Connection Theory.
Amazingly, I have never found Connection Theory endorsed by even a single conventionally educated person with knowledge of psychology. Yet some of my most intelligent friends end up deciding that Connection Theory seems promising enough to be given the benefit of the doubt. They usually give black-box reasons for supporting it, like, “I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!” They hedge this way as though psychology were a field that couldn’t be probed by science or understood in any level of detail. I would argue that this approach is too forgiving and charitable when you can instead just analyze the theory using standard scientific reasoning. You could also assess its credibility against standard quality markers, or even against the perceived quality of the work that went into developing it.
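The black-box reasoning quoted above can be made explicit. A minimal expected-value sketch (the dollar figures are illustrative assumptions, not numbers from any Leverage document):

```python
def expected_value(p_correct: float, value_if_correct: float) -> float:
    """EV of endorsing a theory, ignoring all costs and opportunity costs."""
    return p_correct * value_if_correct

# A "1% chance it's correct" paired with an arbitrarily large payoff:
print(expected_value(0.01, 1_000_000))    # 10000.0
print(expected_value(0.01, 100_000_000))  # 1000000.0
# The EV scales with whatever payoff you care to claim, no matter how
# weak the evidence -- which is why this argument proves too much.
```

The calculation only looks compelling because the downside (time, money, and credibility spent on a false theory) is left out of the model entirely.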
To start, here are some warning signs for Connection Theory:
- Invented by amateurs without knowledge of psychology
- Never published for scrutiny in any peer-reviewed venue, conference, open access journal, or even a non peer-reviewed venue of any type
- Unknown outside of the research community that created it
- Vaguely specified
- Cites no references
- Created in a vacuum from first principles
- Contains disproven cartesian assumptions about mental processes
- Unaware of the frontier of current psychology research
- Consists entirely of poorly conducted, unpublished case studies
- Unusually lax methodology... even for psychology experiments
- Data from early studies shows a "100% success rate" -- the way only a grade-schooler would forge their results
- In a 2013 talk at Leverage Research, the creator of Connection Theory refused to acknowledge the possibility that his techniques could ever fail to produce correct answers.
- In that same talk, when someone pointed out a hypothetical way that an incorrect answer could be produced by Connection Theory, the creator countered that if that case occurred, Connection Theory would still be right by relying on a redefinition of the word “true”.
- The creator of Connection Theory brags about how he intentionally targets high net worth individuals for “mind charting” sessions so he can gather information about their motivation that he later uses to solicit large amounts of money from them.
I don't know about you, but most people get off this crazy train somewhere around stop #1. And given the rest, can you really blame them? The average person who sets themselves up to consider (and possibly believe) ideas this insane doesn't have long before they end up pumping all their money into get-rich-quick schemes or drinking bleach to try and improve their health.
But maybe you think you’re different? Maybe you’re sufficiently epistemically advanced that you don't have to disregard theories with this many red flags. In that case, there's now an even more fundamental reason to reject Connection Theory: As Alyssa Vance points out, the supposed "advance predictions" attributed to Connection Theory (the predictions being claimed as evidence in its favor in the only publicly available manuscript about it), are just ad hoc predictions made up by the researchers themselves on a case by case basis -- with little to no input from Connection Theory itself. This kind of error is why there has been a distinct field called "Philosophy of Science" for the past 50 years. And it's why people attempting to do science need to learn a little about it before proposing theories with so little content that they can't even be wrong.
I mention all this because I find that people from outside the Bay Area or those with very little contact with Leverage often think that Connection Theory is part of a bold and noble research program that’s attacking a valuable problem with reports of steady progress and even some plausible hope of success. Instead, I would counsel newcomers to the effective altruist movement to be careful how much you trust Leverage and not to put too much faith in Connection Theory.
Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.
Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be not just more rational than average, but paragons of rationality whom other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.
Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.
I came up with an idea today: I think it would be useful to have a list of everything that a typical person ought to do. After all, there is quite a lot of stuff that a typical person ought to do; how else is a person supposed to remember it all?
Here's what I've come up with so far:
- Eating well.
- Exercising regularly.
- Mitigating common risks in everyday life (e.g. wearing a seat belt while driving).
- Other everyday health stuff (e.g. not sitting down eight hours a day).
- Visiting a doctor, a dentist, and (if necessary) an optometrist on a regular basis.
- Being familiar with common health problems and what to do about them.
- Being able to recognize medical emergencies and react appropriately.
- Maintaining mental health (see also: pretty much this entire list).
- Educating oneself in career-related skills.
- Getting a job and/or becoming self-employed.
- Networking (see also: interpersonal interaction).
- Investing one's savings appropriately.
- Donating to charity.
- Volunteering for charity.
- Doing favors for friends.
- Developing and maintaining relationships with other people: friendly, romantic, family, others?
- Discussion of useful topics.
- Hobbies, finding ways to enjoy yourself. (I'm not sure how to expand on this one.)
- Any responsibilities one has signed up for.
- Developing and maintaining one's ability to get stuff done. (Kinda vague, this one.)
- Maintaining a list of what needs to be done.
- Making good decisions about what to do. In particular, which of the items on this list to focus on and how to accomplish them.
- Developing the skills one needs to carry out these tasks effectively, through education, experience, and discussion.
Anyone have any suggestions for additions or improvements?
edit 1: some suggestions by Rain and aelephant
Changing Systems is Different than Running Controlled Experiments - Don’t Choose How to Run Your Country That Way!
Trigger warning: Discussion of rape.
Say that each morning you tell yourself that you are lazy for not wanting to get out of bed to go to work, as a way to convince yourself to get up. Perhaps if the only variable you changed was to lower your level of guilt, you might not get out of bed to go to work, and would instead take the day off. So if you are running a motivation system that uses guilt, feeling guilt may well be something you do not want to get rid of. If you got rid of the guilt but stopped going to work, that would likely be a net negative for your life.
By contrast, with animal training you reinforce behavior you want in the animal, and interrupt, redirect, or completely ignore (i.e., no shaming or guilting) behavior you don't want. Meditation uses a similar methodology. When you meditate, you are told to focus on a meditative object such as the breath. When your mind wanders from the meditative object, you are instructed simply to return your attention to it, without punishing yourself in any way for having wandered. You are also instructed not to punish yourself for punishing yourself for having your mind wander. Meditation does not use reward during the meditative process, although it's common to sound a beautiful chime at the end of a session, which gives hedons, and people often perform a pleasant ritual before and/or after meditation that builds positive association with the activity of meditating. Example page of meditation instructions.
So, if you switch to a positive reinforcement motivational system, such as that which animal trainers use to train dogs, then guilt is counter-productive for motivation, because it is a form of punishment.
If you only change one variable from a motivation system that uses guilt, then it may break the system, and be a net negative. However, there is likely a way to get a net utility gain by changing several variables of the system, such as by switching to a positive reinforcement based system where you add instant rewards that increase hedons and remove guilt and other punishments.
As it stands, there are many unreported rapes in American society. This excellent article debunks many myths about rape, including the classic myth that rapes are generally done by strangers using force:
A huge proportion of the women I know enough to talk with about it have survived an attempted or completed rape. None of them was raped by a stranger who attacked them from behind a bush, hid in the back of her car or any of the other scenarios that fit the social script of stranger rape. Anyone reading this post, in fact, is likely to know that six out of seven rapes are committed by someone the victim knows.
The author goes on to explain that most rapes are committed by repeat offenders who, by a median age of 26.5, have raped around 5-6 women each on average; that the rapist is almost always someone in the woman's social circle; and that intoxicants are usually involved.
The suggestion of change of system that I got from this post is actually in the title of the blog: "Yes Means Yes."
If the social rules for consent are changed from "if a woman does not say no, then it may or may not be okay" to "it is only okay if a woman says yes," then the boundary becomes a lot more clear to both parties. It would be a pretty radical system change, that would make a lot of people uncomfortable.
To be more clear: with a "Yes Means Yes" system, you don't need "No Means No", because sex is only had when there is a yes. If a woman is too drunk to say or enforce a no, then she is also too drunk to say yes, and sex is not had unless there is explicit consent. Having a "Yes Means Yes" social policy would shift the onus of making sure that sex is consensual from the woman, who under the current system is obligated to say no if she doesn't want to, to both parties, who must say yes to proceed. This would not stop all rape by any means, but if implemented in a system where people were taught good communication and assertiveness, it would cut down on it. For example, instead of feeling that it was her fault because she got drunk and didn't say no aggressively enough, a woman would quickly realize, "hey, I didn't say yes!", and a predatory guy, one of the small percentage of men who rape women, would realize that the woman would be less likely to just feel ashamed and keep quiet, and more likely to take action to defend herself.
Perhaps some people would be afraid that they'd remain virgins for life in this system - some men might be afraid that they'd be too shy to ever ask, some women might not feel comfortable actually admitting that they want sex. And therefore, people of both genders might be resistant to switching systems because they would imagine the switch without a complete social system switch or training. And as it stands, perhaps a lot less sex would happen at first. A system like that would require retraining a lot of society to be more assertive.
Just shifting one variable and telling men to say "I only have sex when women say yes" would be very weird. If a guy tried to implement that in the current system, some people might look at him like he was crazy or even get offended.
I think the "Yes Means Yes" system would work beautifully in a society that functioned based on a different system - where the social norm, which people were trained in, was to identify and state one's desires, and to not proceed without clarity. I do think it would cut down on rape, and unreported rape.
I've discovered that when talking to people about potential novel systems, the most common response I get is an explanation of why the alternative system won't work, based on what would happen if you changed one variable of the current system to be more like the novel system. Examples: "If I didn't feel guilty, I'd never get anything done," or "In a system where you always had to have a clear yes before having sex, people would feel really awkward and uncomfortable and opt out." (Alternatively, I will often hear people justify alternative systems using similar single-variable arguments.)
The examples above are a couple of the more simple examples of this general principle I've been observing quite a lot lately.
Consider how this applies to government systems, and other social systems. There are so many parts dependent on each other, that it is very hard to shift any single one without creating a domino effect of other shifts. So making any argument about how changing a single variable would fix or destroy a complex system like government is usually a huge oversimplification.
To quote Einstein:
It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.
My thoughts on making large-scale change are that you need to be thinking large scale. If you want to be a change maker, it is best to start small in your actions, and to study and experiment a lot. Focus your studies on success and failure scenarios as close as possible to what it is you want to effect, while as diverse as possible from each other.
Running single-variable experiments is important - it is just that it is only how you understand a little corner of the problem to be solved - that's not how you find the solution itself to a problem involving a complex system.
To give a biological analogy: Cancer is what happens when a single type of cell tries to become the whole system. Running a single-variable controlled experiment to determine what type of complex system you want to choose is like trying to determine the optimal form of cancer, as opposed to looking at an entire entity. Life is complicated.
I want to spend a substantial fraction of my time optimizing myself in the direction of being more attractive to females, and I'd really appreciate your suggestions on how to do so.
It should be pretty self-explanatory, but in case you're wondering: relationships are a big part of personal happiness, and where I am now, I feel more inclined toward increasing the number and variety of short- or medium-term sexual relationships rather than just picking a girl who wants to be my wife and running with it. But at the moment women aren't exactly chasing me down the street, so I want to offer them a more pleasant experience of my company than they currently get.
I sincerely think this post should provoke none of the above. I'm not asking for ways to trick women into liking me, nor about gender differences in what males prefer versus females, etc. Please try really hard to keep mind-killing subjects out of your comments. I'm 'just' asking for ways to change myself into a more sexually attractive human being.
I'm aware of the dichotomy lying around: attraction can be created vs attraction can only be amplified. In both cases there should be at least something that can be done.
I'm also aware that some people strongly dislike posts full of personal details, so I will try to keep them to a minimum, while at the same time trying to provide the necessary description of my situation.
What I would like
Try to aim for advice on stable improvements, in aspects that are proven to be sexually attractive to straight females in the 20-40 age range.
For example, I know that height and facial symmetry are proven to be universally attractive, but I cannot really change those, and sole lifts or make-up are such short-term solutions that they border on 'tricking women' (yes, I know that women use those tricks too; I simply would like to invest my time better).
This is the shortest possible description: I'm a straight male in my thirties, heavily overweight, living in Italy in a 20k people town, with a job paying me about $20k a year.
If you think you need more details ask for them in the comments or PM me.
What I'm already doing/planning to do
The first obvious choice is getting fit, although I've been trying different diets for about two years with no results, so I'd really appreciate pointers in that direction. I've also heard about training programs that tell you to concentrate on the shoulders, because apparently a shoulder-to-waist ratio of 1.5 or more is especially attractive.
I've also been told multiple times, by multiple sources, that women value confidence, competence, and leadership. I understand the confidence part as being able to express your interest without embarrassment (but still in a socially graceful manner), but I would really like pointers about what areas of my life I could work on to become more competent or a leader. In what domains do women like competence/leadership?
My only hobbies at the moment are the game of Go and dabbling in math/logic/AI, which, as fascinating as they are, are seldom considered very attractive.
What I'm not sure about
Is fashion important? I understand that I need to dress well for my build, but I would like to know if a Versace button-down shirt is more attractive than a plain-brand one.
Do you think I am doing the right thing? Or am I wrong in my search for attractiveness? Should I concentrate on something totally unrelated? Does the physical aspect matter, or should I concentrate more on character? Am I completely off track?
If you think I'm grossly mistaken, in the name of Omega let me know!
If you think this post doesn't belong in a community devoted to rationality and self-improvement, feel free to downvote, but at least try to suggest a better way to phrase the problem, or point me to another community where I can ask the same question.
Thank you very much!
Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.
Curriculum development is hard.
So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great.
A case for developmental thinking
Below is an annotated bibliography of some of my personal touchstones in the development literature: books that are foundational, or that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.
And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Many of these authors care deeply about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule, or about helping skills manifest when they wouldn't otherwise have appeared at all. Quibbles and double-takes aside, there is lots of signal here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).
There are clues or even neon signs, here, for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.
So all the material below is not only useful for thinking about remedial or grade-school situations, and is not just for adding more tools to a cognitive toolbox, but could be useful for radically transforming a person's thinking style at a deep level.
child:adult :: adult: ?
This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades in an attempt to describe individuals who may represent far less than one percent of the population.
0. Protocol analysis
Everyone knows that people are poor reporters of what goes on in their heads. But this is a strawman. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Granddaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I mention this first because many of the researchers below use verbal reports in their work.
1. Developmental aspects of scientific thinking
Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000 foot summaries. These people are awesome.)
David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and an experiment space. He believes that scientific thinking is not different in kind from everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.
2. Developmental aspects of executive or instrumental thinking
Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals, and in how these differences relate to an individual's "time horizon," or maximum time length over which an individual can comfortably execute a goal. Additionally, he has explored how these factors change predictably over a lifespan.
3. Developmental aspects of entrepreneurial thinking
Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.
"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf
4. General Cognitive Development
Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over the lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.
Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through:
Here is a recent look at the field:
By the way, having explicit cognitive goals predicts an increase in ego level three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased well-being.) Socio-emotional goals do predict an increase in subjective well-being three years later. Great study:
Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.
5. Bridging symbolic and non-symbolic cognition
Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"
The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.
Marion N. Hendricks reviews 89 studies, concluding that [I quote]:
- Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
- Clients and therapists judge sessions in which focusing takes place as more successful.
- Successful short term therapy clients focus in every session.
- Some clients focus immediately in therapy; others require training.
- Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
- Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
- Successful training in focusing is best maintained by those clients who are the strongest focusers during training.
http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]
http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]
http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]
6. Rigorous Instructional Design
Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:
Here, he systematically eviscerates an example of educational material that doesn't meet his standards:
And this is his instructional design philosophy:
In conclusion, scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults, and about making those differences happen ahead of schedule, or happen when they otherwise wouldn't have happened at all. This is a valuable perspective, complementary to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.
Link to those results: http://lesswrong.com/lw/fp5/2012_survey_results/
I've been lurking on this site for more than a year now, and it's incredible that I have taken anything on it seriously at all, let alone that at least thousands of others have. I have never received evidence that I am less likely to be overconfident than people in general, or that any other particular person on this site is.
Yet in spite of this, apparently 3.7% of survey respondents have actually signed up for cryonics, which is surely a higher rate than in the world at large, and evidently 72.9% of people here are at least considering signing up. The entire idea seems to be taken especially seriously on this site. I think the chance of cryonics working is trivial, for all practical purposes indistinguishable from zero; the expected value of the benefit is certainly not worth several hundred thousand dollars in future-value terms. Other people here apparently disagree. But if the rest of the world is undervaluing cryonics, why don't those here with privileged information invest heavily in forming new for-profit cryonics organizations, start them alone, or invest in the technology that will supposedly make the revival of cryonics patients possible? If the rest of the world really is underconfident about these ideas, such investments would have an enormous expected rate of return.
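The commenter's expected-value argument can be sketched as a simple break-even calculation. Every number below is a hypothetical assumption chosen for illustration; neither the survey nor the comment supplies actual figures:

```python
# Hedged sketch: at what probability of revival does signing up for
# cryonics break even? All inputs are hypothetical placeholders.

def breakeven_probability(cost, value_if_revived):
    """Probability of revival at which expected benefit equals cost."""
    return cost / value_if_revived

# Assumed lifetime cost of ~$200,000 in future value (the comment says
# "several hundred thousand dollars"), and an assumed subjective value
# of revival of $10,000,000.
p = breakeven_probability(200_000, 10_000_000)
print(p)  # 0.02 -> under these assumptions, signing up only pays
          # if you put the probability of revival above 2%
```

The disagreement between the commenter and cryonics advocates then reduces to whether the true probability of revival sits above or below that break-even point.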
There is also a question asking about the relative likelihood of different existential risks, which seems to imply that any of these risks is especially worth considering. This is not really a fault of the survey itself, as I have read significant discussion of these ideas on this site. In my judgment this reflects gross overconfidence in the probability of any of them occurring. How many survey respondents have actually made significant personal preparations for survival, such as a fallout shelter stocked with food, which would actually be useful under most of the scenarios listed? I generously estimate that 5% have made any such preparations.
The survey also mentions, and this site discusses at length, counterfactuals that are in my view meaningless. The questions on dust specks vs. torture and Newcomb's Problem are so unlikely ever to be relevant in reality that I consider discussion of them worthless.
My judgment of this site as of now is that far too much time is spent discussing subjects of such low expected value (usually because of an absurdly low probability of occurring) for using this site to be worthwhile. In fact, I hypothesize that this discussion actually causes overconfidence about such things happening, and at a minimum I have seen insufficient evidence of the value of using this site to continue doing so.
Various people raised concerns, after reading my "LessWrong could grow a lot" thread, that growth might ruin the culture. There has been some discussion of whether Endless September, a phenomenon that kills online discussion groups, is a significant threat to LessWrong, and of what can be done about it. I care about this enough that I volunteered to code a solution myself, for free, if needed. Luke invited debate on the subject (the debate is here); he will be sent the results of this poll and asked to make a decision. He suggested in an email that I wait a little while before posting the poll (meta threads are apparently annoying to some, so we let people cool off). Here it is, preceded by a Cliff's-notes summary of the concerns.
Why this is worth your consideration:
- Yvain and I checked the IQ figures in the survey against other data this time. The good news is that it's now more believable that the average LessWronger is gifted. The bad news is that LessWrong's average IQ has decreased on each survey. One can argue that it isn't decreasing by much, or that we don't have enough data, but if the data are good, LessWrong's average has lost 52% of its giftedness since March 2009.
- Efforts to grow LessWrong could trigger an overwhelming deluge of newbies.
- LessWrong registrations have been increasing fast and it's possible that growth could outstrip acculturation capacity. (Chart here)
- The Singularity Summit appears to cause a deluge of new users that may have an effect similar to the September influxes of college freshmen that Endless September is named after. (This chart shows a spike correlated with the 2011 summit, when 921 users joined in one month, roughly equal to the total number of active users LW tends to have in a month, going by the surveys or Vladimir's wget.)
- A Slashdot effect could result in a tsunami of new users if a publication with lots of readers like the Wall Street Journal (they used LessWrong data in this article) decides to write an article on LessWrong.
- The sequences contain much of the culture and are long, meaning that "TL;DR" may leave LessWrong vulnerable to cultural disintegration. (New users may not know how detailed LW culture is, or that the sequences carry so much of it. I didn't.)
- Eliezer said in August that the site was "seriously going to hell" due to trolls.
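The "lost 52% of its giftedness" arithmetic in the first bullet above can be reproduced under stated assumptions: take "giftedness" to mean the number of IQ points by which the survey average exceeds a giftedness cutoff. The cutoff and the survey averages below are hypothetical placeholders chosen only to illustrate the calculation, not the actual survey figures:

```python
# Hedged sketch of the "percent of giftedness lost" calculation.
# The cutoff and averages are assumed, not taken from the surveys.

GIFTED_CUTOFF = 132  # assumed cutoff; the post does not state one

def giftedness_lost(old_avg, new_avg, cutoff=GIFTED_CUTOFF):
    """Fraction of points-above-cutoff lost between two survey averages."""
    old_margin = old_avg - cutoff
    new_margin = new_avg - cutoff
    return 1 - new_margin / old_margin

# Illustrative survey averages (assumed):
loss = giftedness_lost(145.9, 138.7)
print(f"{loss:.0%}")  # 52% under these assumed numbers
```

Note how sensitive the headline percentage is to the chosen cutoff: a modest drop in the raw average becomes a large drop in the margin above the cutoff, which is worth keeping in mind when judging the "52%" claim.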
Two Theories on How Online Cultures Die:
Overwhelming user influx.
There are too many new users to be acculturated by older members, so they form their own, larger new culture and dominate the group.
Trending toward the mean.
A group forms because people who are very different want a place to be different together. The group attracts more people who are closer to the mainstream than people who are equally different, simply because mainstream people are more numerous. The larger group then attracts people who are even less different in the original group's way, for the same reason. The original group is slowly overwhelmed by newcomers who will never understand it, because they are not different in the way the original members were.
Request for Feedback:
In addition to constructive criticism, I'd also like the following:
- Your observations of a decline or increase in quality, culture, or enjoyment at LessWrong, if any.
- Ideas to protect the culture.
- Ideas for tracking cultural erosion.
- Ways to test the ideas to protect the culture.