
Comment author: moridinamael 09 September 2014 08:46:40PM 32 points

I have found that the more I use my simulation of HPMOR!Quirrell for advice, the harder it is to shut him up. As with any mental discipline, thinking in particular modes wears thought-grooves into your brain's hardware, and before you know it you've performed an irreversible self-modification. Consequently, I would definitely recommend that anybody attempting to supplant their own personality (for lack of a better phrasing) with a model of some idealized reasoner try to make sure that the idealized reasoner shares their values as thoroughly as possible.

Comment author: MichaelVassar 18 September 2014 06:33:07PM 5 points

Possibly valuable to talk with Robin Hanson and me about revisions to HPMOR!Quirrell's decision procedures, straight from the source?

Comment author: Viliam_Bur 27 August 2014 03:24:49PM * 65 points

(...continued)

The general ability of updating. At the beginning of Freud's career, the state-of-the-art psychotherapy was hypnosis, which was called "magnetism". Some scientists had discovered that the laws of nature are universal, and other scientists jumped to the seemingly obvious conclusion that, analogously, all kinds of psychological forces among humans must be the same as the force which makes magnets attract or repel each other. So Freud learned hypnosis, used it in therapy, and was enthusiastic about it. But later he noticed that it had some negative side effects (female patients frequently falling in love with their doctors, then returning to their original symptoms when the love was not reciprocated), and that the positive effects could also be achieved without hypnosis, simply by talking about the subject (assuming that some conditions were met, such as the patient actually focusing on the subject instead of focusing on their interaction with the doctor; a large part of psychoanalysis is about optimizing for these conditions). The old technique was thrown away because the new one provided better results. Not exactly "evidence-based medicine" by our current standards, but perhaps we could use as a control group all those doctors who stubbornly refused to wash their hands between performing autopsies and treating their patients, despite their patients dropping like flies. -- Later, Freud discarded his original model of the unconscious, preconscious, and conscious mind, and replaced it with the "id, ego, superego" model. (This is provided as evidence of the ability to update, to discard both commonly accepted models and one's own previous models. Which we consider an important part of rationality.)

Speaking of the "id, ego, superego" model, here is the idea of a human brain not being a single agent, but being composed of multiple modules, sometimes opposed to each other. Is this something worth considering for Less Wrong readers, either as a theoretical step towards a reduction of consciousness, or as a practical tool for e.g. overcoming akrasia? "Ego" as the rational part of the brain, which can evaluate consequences, but often doesn't have enough power to enforce its decisions without emotional support from some other part of the brain. "Id" as the emotional part, which does not understand the concept of time. "Superego" as a small model of other people in our brain. Today we could probably locate the parts of the physical brain they correspond to.

"The Psychopathology of Everyday Life" is a book describing how seemingly random human errors (random movements, forgetting words, slips of the tongue) sometimes actually make sense if we perceive them as goal-oriented actions of some mental subagent. The biggest problem of the book is that it is heavy with theory, and a large part of it focuses on puns in German language... but remove all of this, don't mention the origin, and you could get a highly upvoted article on Less Wrong! (The important part would be not to give any credit to Freud, and merely present it as an evidence for some LW wisdom. Then no one will doubt your rationality.) -- On the other hand, "Civilization and Its Discontents" is a perfect book to be rewritten into a series of articles on Overcoming Bias, about a conflict between forager mentality and farmer social values.

But updating and modelling human brains are topics interesting for Less Wrong readers. Most people would focus on, you know, sex. Well, how exactly could we doubt the importance of sexual impulses in a society where displaying a pretty lady is advertising 101, Twilight is a popular book, and the internet is full of porn? (Also, scientists accept the importance of sexual selection in evolution.) Our own society is a huge demonstration that Freud was right about the most controversial part of his theory. The only way to make him wrong about this is to create a strawman and claim that according to Freud everything was about sex, so that if we find a single thing that isn't, we have proved him wrong. -- But that strawman was already used in Freud's era; he actually started one of his books by disproving it. Too bad I don't remember which one. One of the case histories, probably. (It starts like this: People keep simplifying my theories to say that all dreams are dogmatically about sex, so here is a simple example to correct the misunderstanding. And he describes a situation where some child wanted an ice cream, the parents forbade it, and the child was unhappy and cried. That night, the child had a dream about travelling to the North Pole, through mountains of snow. This, says Freud, is what resolving a suppressed desire in a dream typically looks like: The child wanted the ice cream, that's desire #1, but the child also wanted to avoid conflict with their parents, that's desire #2. How to satisfy both of them? The "mountains of snow" obviously symbolize the ice cream; the child wants it, and gets it, a lot! But to avoid a conflict with the parents, even in the dream, the ice cream is censored and becomes snow, so the child can plausibly deny to themselves that they disobeyed their parents. This is Freud's model of human dreams. It's just that an adult would probably not obsess so much over an ice cream, which they can buy if they really want it, but over something unavailable, such as a sexy neighbor; and a smart adult would also use more complex censorship to fool themselves.) Also, he wrote a whole book called "Beyond the Pleasure Principle" where he argues that some mind modules may be guided by principles other than pleasure; for example: nightmares, repetition compulsion, aggression. (His explanation of this other principle is rather poor: he invents a mystical death principle opposing the pleasure principle. Anyway, it's evidence against the "everything is about sex" strawman.)

Freud was an atheist, and very public about it. He essentially described religion as a collective mental disease, in a book called "The Future of an Illusion". He used and recommended using cocaine... if he lived in the Bay Area today, and used modafinil instead, I can easily imagine him being a very popular Less Wrong member. -- But instead he lived a century ago, so he could only be one of those people spreading controversial ideas which are now considered obvious in hindsight.

tl;dr -- I strongly disagree with using Freud as a textbook example of insanity. Many of his once controversial ideas are so obvious to us now that we simply don't attribute them to him. Instead we just associate him with the few things he got wrong. And the whole meme was started by people who were even more wrong.

Comment author: MichaelVassar 09 September 2014 02:41:26PM 5 points

There's an anecdote near the beginning of "Introduction to Psychoanalysis" where he discusses the dreams of arctic explorers, which are almost entirely about food, not about sex, for understandable reasons.

Comment author: nerzhin 01 December 2010 10:49:08PM 13 points

A major point of the post is that it is possible to both say what you mean clearly and accurately and choose your words to be polite and non-confrontational.

There are two games: a communication game and a social game. You (and the analytical people in the post) only see the communication game, thinking that if you cooperate in it you must defect in the social game.

In fact, you are allowed to cooperate in both games, and receive good payoffs whether or not your opponent is in the analytical cluster.

Comment author: MichaelVassar 17 January 2014 09:54:41AM 0 points

It is possible to play both, but difficult, and you can't play both at once as well as equally smart non-analytical types can play just the social game.

Comment author: patrissimo 15 December 2010 04:07:27AM 5 points

If we're going to talk about the cognitive framing effects of language, as the original post did, how about your use of the word "Mundane"?

To me, it seems actively harmful to accurate thinking, happiness, and your chance of doing good in the world. The implication is that most humans are a separate, lower class, with a suggestion of contempt and/or disgust for those inferior beings, which has empirically led to badness (historically: genocide; in my personal experience: it has been poisonous to Objectivism and to various atheist groups I've been in).

I'd like to hear some examples where framing most people as both "lesser" and "other" has led to good for the world, because all the ones I'm pullin' up are pretty awful...

Comment author: MichaelVassar 17 January 2014 09:18:43AM 0 points

Two examples: sexual selection and speciation. 'Nuff said.

Comment author: Wei_Dai 08 June 2011 07:33:07PM * 5 points

Not working as a kid would be expected, since you have nothing of value to offer other kids for them to put up with your social awkwardness. Might be different in the workplace (if your job is mainly to contribute a technical skill instead of a social one).

Comment author: MichaelVassar 17 January 2014 09:17:07AM 2 points

Yep, but the vast majority of people in a workplace, even those nominally there to deliver technical skills, are in reality there to deliver social skills, and all of the most highly paid people are paid for social skills.
That said, you're right, it's still worth it. Being officially a foreigner is possibly the best approach.

Comment author: MichaelVassar 12 January 2014 05:21:47PM 0 points

Another reasonable concern has to do with informational flow-through lines. When novel investigation demonstrates that previous claims or perspectives were in error, do we have good ways to change the group consensus?

Comment author: Nick_Beckstead 02 December 2013 04:49:24PM * 12 points

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but it is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.

Comment author: MichaelVassar 10 December 2013 05:30:32PM 10 points

I spent many hours explaining a subset of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement; you merely want to engage with disagreement. Basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.

I may, of course, be wrong, and I'm not sure how people should respond in such a situation. It strongly seems to me that a) leftist movements tend to collapse in schism, and b) rightist movements tend to converge on generic xenophobic authoritarianism regardless of their associated theory. I'd rather we avoid both of those situations, but the first seems like an inevitable result of not accommodating belief in belief, while the second seems like an inevitable result of accommodating it. My instinct is that the best option is to not accommodate belief in belief and to keep a movement small enough that schism can be avoided. The worst thing for an epistemic standard is not the person who ignores or denies it, but the person who mostly follows it when doing so feels right or is convenient, while not acknowledging that they aren't following it when it feels weird or inconvenient, as that leads to a community of people with such standards engaging in double-think WRT whether their standards call for weird or inconvenient behavior. OTOH, my best guess is that about 50 people is as far as you can get with my proposed approach.

Comment author: Nick_Beckstead 26 August 2013 09:25:52AM * 1 point

Would be interested to know more about why you think this is "fantastically wrong" and what you think we should do instead. The question the post is trying to answer is, "In practical terms, how should we take account of the distribution of opinion and epistemic standards in the world?" I would like to hear your answer to this question. E.g., should we all just follow the standards that come naturally to us? Should certain people do this? Should we follow the standards of some more narrowly defined group of people? Or some more narrow set of standards still?

I see the specific sentence you objected to as very much a detail rather than a core feature of my proposal, so it would be surprising to me if this was the reason you thought the proposal was fantastically wrong. For what it's worth, I do think that particular sentence can be motivated by epistemology rather than conformity. It is naturally motivated by the aggregation methods I mentioned as possibilities, which I have used in other contexts for totally independent reasons. I also think it is analogous to a situation in which I have 100 algorithms returning estimates of the value of a stock and one of them says the stock is worth 100x market price and all the others say it is worth market price. I would not take straight averages here and assume the stock is worth about 2x market price, even if the algorithm giving a weird answer was generally about as good as the others.

Comment author: MichaelVassar 10 December 2013 05:10:24PM 2 points

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with the others, then yes, in the short term you should probably effectively ignore it; but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it. More importantly, such deviations should be investigated with some urgency.
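A minimal sketch of how these aggregation rules differ on the 100-algorithm stock example from the parent comment (the numbers 1.0 and 100.0 are illustrative stand-ins for "market price" and "100x market price"):

```python
import math

# Hypothetical estimates: 99 algorithms say "market price" (1.0),
# one outlier says the stock is worth 100x market price.
estimates = [1.0] * 99 + [100.0]
n = len(estimates)

arithmetic = sum(estimates) / n
geometric = math.exp(sum(math.log(x) for x in estimates) / n)
harmonic = n / sum(1.0 / x for x in estimates)

print(f"arithmetic mean: {arithmetic:.3f}")  # ~1.990 -- the outlier nearly doubles the estimate
print(f"geometric mean:  {geometric:.3f}")   # ~1.047 -- the outlier is heavily discounted
print(f"harmonic mean:   {harmonic:.3f}")    # ~1.010 -- the outlier is almost ignored
```

The geometric and harmonic means keep the dissenting algorithm in the pool, so its disagreement can still be noticed and investigated, while preventing it from dominating the aggregate, which is the behavior argued for above.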

Comment author: benkuhn 02 December 2013 03:28:48AM * 1 point

> That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it's not clear to me that that's possible to do.

> Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.)

The problem is more that this motivation gets co-opted by social-reward-seeking systems and we aren't aware of that when it happens. One way to fix this is to fix incentives, it's true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since the reason EA started was to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that that route is likely to be a better solution, and involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).

I'm also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn't a comparative advantage of mine :P

(Responding to the meta-point separately because yay threading.)

Comment author: MichaelVassar 04 December 2013 05:15:20PM 4 points

I think that attempting effectiveness points towards a strong attractor of taking over countries.

Comment author: Vaniver 02 December 2013 01:29:06AM * 7 points

> Incidentally, I don't actually consider being thoughtful about social dynamics a comparative advantage. I think we need more, like, sociologists or something--people who are actually familiar with the pitfalls of being a movement.

That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it's not clear to me that that's possible to do.

What does the person for whom EA is easy look like? My first guess is a person who gets warm fuzzies from rigor. But then that suggests they'll overconsume rigor and underconsume altruism.

> I'm less concerned about one of the principles failing than I am that the principles won't be enough--that people won't apply them properly because of failures of epistemology.

Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?

> I see now that it's not obvious from the finished product, but this was actually the prompt I started with. I removed most of the doom-mongering (of the form "these problems are so bad that they are going to sink EA as a movement") because I found it less plausible than the actual criticisms and wanted to maximize the chance that this post would be taken seriously by effective altruists.

Interesting. The critique you've written strikes me as more "nudging" than "apostasy," and while nudging is probably more effective at improving EA, keeping those concepts separate seems useful. (The rest of this comment is mostly meta-level discussion of nudging vs. apostasy, and can be ignored by anyone interested in just the object-level discussion.)

I interpreted the idea of apostasy along the lines of Avoiding Your Belief's Real Weak Points. Suppose you knew that EA being a good idea was conditional on there being a workable population ethics, and you were uncertain if a workable population ethics existed. Then you would say "well, the real weak spot of EA is population ethics, because if that fails, then the whole edifice comes crashing down." This way, everyone who isn't on board with EA because they're pessimistic about population ethics says "aha, Ben gets it," and possibly people in EA say "hm, maybe we should take the population ethics problem more seriously." This also fits Bostrom's idea -- you could tell your past self "look, past Ben, you're not taking this population ethics problem seriously, and if you do, you'll realize that it's impossible and EA is wasted effort." (And maybe another EAer reads your argument and is motivated to find that workable population ethics.)

I think there's a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility because it's far easier to subconsciously nudge your estimate of plausibility than your estimate of badness-if-true. I want to say there's an article by Yvain or Kaj Sotala somewhere about "I hear criticisms of utilitarianism and think 'oh, that's just uninteresting engineering, someone else will solve that problem' but when I look at other moral theories I think 'but they don't have an answer for X!' and think that sinks their theory, even though its proponents see X as just uninteresting engineering," which seems to me a good example of what differing plausibility assumptions look like in practice. Part of the benefit of this exercise seems to be listing out all of the questions whose answers could actually kill your theory/plan/etc., and then looking at them together and saying "what is the probability that none of these answers go against my theory?"

Now, it probably is the case that the total probability is small. (This is a belief you picked because you hold it strongly and you've thought about it a long time, not one picked at random!) But the probability may be much higher than it seems at first, because you may have dismissed an unpleasant possibility without fully considering it. (It also may be that by seriously considering one of these questions, you're able to adjust EA so that the question no longer has the chance of killing EA.)
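As a toy illustration of why the total can be higher than it feels (hypothetical numbers, assuming the weak points are roughly independent): even if each individual weak point seems very unlikely to be fatal, the chance that at least one of them is fatal compounds quickly.

```python
# Hypothetical: ten independent weak points, each judged only 3% likely
# to be fatal on its own. The chance that at least one kills the theory
# is 1 minus the chance that all ten survive.
p_fatal_each = 0.03
n_weak_points = 10

p_all_survive = (1 - p_fatal_each) ** n_weak_points
p_at_least_one_fatal = 1 - p_all_survive

print(f"P(all survive):        {p_all_survive:.2f}")        # ~0.74
print(f"P(at least one fatal): {p_at_least_one_fatal:.2f}") # ~0.26
```

So ten individually dismissible 3% risks add up to roughly a one-in-four chance that something in the list kills the theory, which is the point of listing the questions out and considering them together.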

As an example, let's switch causes to cryonics. My example of cryonics apostasy is "actually, freezing dead people is probably worthless; we should put all of our effort into making it legal to freeze live people once they get a diagnosis of a terminal condition or a degenerative neurological condition" and my example of cryonics nudging is "we probably ought to have higher fees / do more advertising and outreach." The first is much more painful to hear, and that pain is both what makes it apostasy and what makes it useful to actually consider. If it's true, the sooner you know the better.

Comment author: MichaelVassar 04 December 2013 05:14:02PM 2 points

I think that this is an effective list of real weak spots. If these problems can't be fixed, EA won't do much good.
