
Doublethink (Choosing to be Biased)

Post author: Eliezer_Yudkowsky 14 September 2007 08:05PM

An oblong slip of newspaper had appeared between O'Brien's fingers. For perhaps five seconds it was within the angle of Winston's vision. It was a photograph, and there was no question of its identity. It was the photograph. It was another copy of the photograph of Jones, Aaronson, and Rutherford at the party function in New York, which he had chanced upon eleven years ago and promptly destroyed. For only an instant it was before his eyes, then it was out of sight again. But he had seen it, unquestionably he had seen it! He made a desperate, agonizing effort to wrench the top half of his body free. It was impossible to move so much as a centimetre in any direction. For the moment he had even forgotten the dial. All he wanted was to hold the photograph in his fingers again, or at least to see it.

'It exists!' he cried.

'No,' said O'Brien.

He stepped across the room.

There was a memory hole in the opposite wall. O'Brien lifted the grating. Unseen, the frail slip of paper was whirling away on the current of warm air; it was vanishing in a flash of flame. O'Brien turned away from the wall.

'Ashes,' he said. 'Not even identifiable ashes. Dust. It does not exist. It never existed.'

'But it did exist! It does exist! It exists in memory. I remember it. You remember it.'

'I do not remember it,' said O'Brien.

Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.

   —George Orwell, 1984

What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.

Second-order rationality implies that at some point, you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy."  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You're welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.

For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.

You can't know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.

The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear.  You won't have to put up with the inconvenience of a seatbelt.  You will be happily unconcerned for a day, a week, a year.  Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb.  Or paralyzed from the neck down.  Or dead.  It's not inevitable, but it's possible; how probable is it?  You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in.  You can't make that tradeoff rationally unless you know about biases like neglect of probability.

No matter how many days go by in blissful ignorance, it only takes a single mistake to undo a human life, to outweigh every penny you picked up from the railroad tracks of stupidity.
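The driving tradeoff above can be made concrete with a toy expected-value sketch. All probabilities and utilities here are made-up illustrative assumptions, not real accident statistics; the point is only the structure of the mistake: evaluating a gamble with the probability you wish were true, while living out the one that is.

```python
# Toy expected-utility comparison. Hedged illustration only:
# every number below is an invented assumption chosen to show the
# shape of the tradeoff, not real crash or injury data.

p_crash_realistic = 0.01   # assumed yearly crash probability at your true skill level
p_crash_believed = 0.001   # what irrational optimism tells you it is

u_comfort = 1.0            # assumed utility of a year of blissful unconcern
u_crash = -500.0           # assumed utility of a crippling crash

def expected_utility(p_crash: float) -> float:
    """Expected utility of a year of driving, given a crash probability."""
    return (1 - p_crash) * u_comfort + p_crash * u_crash

# The optimist *evaluates* the gamble with the wrong probability...
believed = expected_utility(p_crash_believed)   # 0.999 - 0.5  = +0.499
# ...but *experiences* outcomes drawn from the real one.
actual = expected_utility(p_crash_realistic)    # 0.99  - 5.0  = -4.01

print(f"believed EU: {believed:+.2f}")  # positive: looks like a good deal
print(f"actual EU:   {actual:+.2f}")    # negative: it isn't
```

The gap between the two numbers is exactly what you cannot see until you have already debiased yourself, at which point the self-deception is no longer available.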

One of the chief pieces of advice I give to aspiring rationalists is "Don't try to be clever." And, "Listen to those quiet, nagging doubts."  If you don't know, you don't know what you don't know, you don't know how much you don't know, and you don't know how much you needed to know.

There is no second-order rationality.  There is only a blind leap into what may or may not be a flaming lava pit.  Once you know, it will be too late for blindness.

But people neglect this, because they do not know what they do not know.  Unknown unknowns are not available. They do not focus on the blank area on the map, but treat it as if it corresponded to a blank territory.  When they consider leaping blindly, they check their memory for dangers, and find no flaming lava pits in the blank map.  Why not leap?

Been there.  Tried that.  Got burned.  Don't try to be clever.

I once said to a friend that I suspected the happiness of stupidity was greatly overrated.  And she shook her head seriously, and said, "No, it's not; it's really not."

Maybe there are stupid happy people out there.  Maybe they are happier than you are.  And life isn't fair, and you won't become happier by being jealous of what you can't have.  I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried.  That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see. 

The happiness of stupidity is closed to you.  You will never have it short of actual brain damage, and maybe not even then.  You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not.  That way is closed to you, if it was ever open.

All that is left to you now, is to aspire to such happiness as a rationalist can achieve.  I think it may prove greater, in the end. There are bounded paths and open-ended paths; plateaus on which to laze, and mountains to climb; and if climbing takes more effort, still the mountain rises higher in the end.

Also there is more to life than happiness; and other happinesses than your own may be at stake in your decisions.

But that is moot.  By the time you realize you have a choice, there is no choice.  You cannot unsee what you see.  The other way is closed.

 

Part of the Against Doublethink subsequence of How To Actually Change Your Mind

Next post: "No, Really, I've Deceived Myself"

Previous post: "Singlethink"

Comments (156)

Comment author: Eliezer_Yudkowsky 14 September 2007 08:12:16PM 12 points [-]

PS: See also Scott Aaronson's classic On Self-Delusion and Bounded Rationality.

Comment author: Benvolio 13 July 2010 06:40:20AM 11 points [-]

I am not an island. There are a few good ways to set up a life of bounded bias, or to make a rational decision about whether or not to engage in bias. I am a social creature, and as such am acutely aware that most of my decisions are made from a mix of peer pressure, groupthink, discussions with friends, unconscious reasoning, and whatever media I may have managed to digest in the past few hours.

I have several friends. One of them, Steve, is a dedicated rationalist but a genuinely kind person. I have given him these instructions: "Please give me unsolicited advice, and interrupt me if you see me doing something stupid or immoral, but only if you think I could emotionally cope with the reasons why my action was immoral." Another friend, Dave, is something of a spiritualist and currently some form of Wiccan something-or-other. Also a kind person, and he has explicit instructions: "Please give me unsolicited advice and help me out if I seem to be unhappy. Give me the course of action you think would make me happiest, so long as it doesn't conflict with what Steve has told me to do."

When I have to get a good think on about something, I call Steve and Dave separately, then call them both together, and compare the three suggestions. What is interesting is that I have done this often enough that I can often predict what each will say, in a sort of mental role-taking that is much easier if you imagine it not being you that has such thoughts. As such I have achieved some bounded bias: I am bigoted enough not to be a social pariah in America (one must be somewhat prejudiced against someone to survive socially, even if it's only against bigots and Republicans), but rational enough not to fall for gambler's fallacies, and at least bright enough to nod along when a modus ponens is explained to me using small words for the fourteenth time.
It's not perfect, but it's mine. Most people outsource their morality anyway, from "what would Jesus do" to local faith leaders to calling their parents for advice; I'm just a little more structured and deliberate about it. Through this system I can have someone with an unbiased view speak to someone with a biased view, and decide which is the better view to have, without having to unsee everything. Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or always make the right choices, but then again, neither would I, and there's nothing special about me making my mistakes.

Comment author: MugaSofer 10 January 2013 11:26:30AM 1 point [-]

Yes, I realize Steve won't be perfectly unbiased every time, or perfectly rational, or always make the right choices, but then again, neither would I, and there's nothing special about me making my mistakes.

A good principle in general. If more people realized this, the world would be a better place, I should think.

Hmm, I wonder if there's some snappy Wise Saying-esque way of formulating this.

Comment author: SeanMCoincon 31 July 2014 07:27:33PM -1 points [-]

"I know I can never be perfect, but that's certainly not going to stop me from trying." --Sean Coincon

:D

Comment author: jeremysalwen 20 August 2012 10:45:03PM *  7 points [-]

Perhaps I am just contrarian by nature, but I took issue with several parts of her reasoning.

"What you're saying is tantamount to saying that you want to fuck me. So why shouldn't I react with revulsion precisely as though you'd said the latter?"

The real question is why should she react with revulsion if he said he wanted to fuck her? The revulsion is a response to the tone of the message, not to the implications one can draw from it. After all, she can conclude with >75% certainty that any male wants to fuck her. Why doesn't she show revulsion simply upon discovering that someone is male? Or even upon finding out that the world population is larger than previously thought, because that implies that there are more men who want to fuck her? Clearly she is smart enough to have resolved this paradox on her own, and posing it to him in this situation is simply being verbally aggressive.

"For my face is merely a reflection of my intellect. I can no more leave fingernails unchewed when I contemplate the nature of rationality than grin convincingly when miserable."

She seems to be claiming that her confrontational behavior and unsocial values are inseparable from rationality. Perhaps this is only so clearly false to me because I frequent lesswrong.

"If it was electromagnetism, then even the slightest instability would cause the middle sections to fly out and plummet to the ground... By the end of class, it wasn't only sapphire donut-holes that had broken loose in my mind and fallen into a new equilibrium. I never was bat-mitzvahed."

This seems to show an incredible lack of creativity (or dare I say it, intelligence), that she would be unable to come up with a plausible way in which an engineer (never mind a supernatural deity) could fix a piece of rock to appear to be floating in the hole in a secure way. It's also incredible that she would not catch onto the whole paradox of omnipotence long before this, a paradox with a lot more substance.

"The eventual outcome would most likely be a compromise, dependent, for instance, on whether the computations needed to conceal one's rationality are inherently harder than those needed to detect such concealment."

Whoah, whoah, since when did cheating and catching it become a race of computation? Maybe an arms race of finding and concealing evidence, but when does computational complexity enter the picture? Second of all, the whole section about the Darwinian arms race makes the (extremely common) mistake of conflating evolutionary "goals" and individual desires. There is a difference between an action being evolutionarily advantageous, and an individual wanting to do it. Never mind the whole confusion about the nature of an individual human's goals (see http://lesswrong.com/lw/6ha/the_blueminimizing_robot/).

One side point is that the way she presents it ("Emotions are the mechanisms by which reason, when it pays to do so, cripples itself") is essentially presenting the situation as Newcomb's Paradox, and claiming that emotions are the solution, since her idea of "rationality" can't solve it on its own.

"By contrast, Type-1 thinking is concerned with the truth about which beliefs are most advantageous to hold."

But wait... the example given is not about which beliefs are most advantageous to hold... it's about which beliefs it's most advantageous to act like you hold. In fact, if you examine all of the further Type-X levels, you realize that they all collapse down to the same level. Suppose there is a button in front of you that you can press (or not press). How could it be beneficial to believe that you should push the button, but not beneficial to push the button? Barring of course, supercomputer Omegas which can read your mind. You're not a computer. You can't get a core dump of your mind which will show a clearly structured hierarchy of thoughts. There's no distinction to the outside world between your different levels of recursive thoughts.

I suppose this bothered me a lot more before I realized this was a piece of fiction and that the writer was a paranoid schizophrenic (the former applying to most else of what I am saying).

"Ah, yet is not dancing merely a vertical expression of a horizontal desire?"

No, certainly not merely. Too bad Elliot lacked the opportunity (and probably the quickness of tongue) to respond.

"But perplexities abound: can I reason that the number of humans who will live after me is probably not much greater than the number who have lived before, and that therefore, taking population growth into account, humanity faces imminent extinction?..."

Because I am overly negative in this post, I thought I'd point out the above section, which I found especially interesting.

But the whole "Flowers for Algernon" ending seemed a bit extreme...and out of place.

Comment author: MugaSofer 10 January 2013 11:30:03AM *  1 point [-]

she can conclude with >75% certainty that any male wants to fuck her.

... she can? Really? That seems pretty damn high for something as variable as taste in partners.

EDIT: wait, that's a reference to how many guys on a university campus will accept offers of one night stands, right? It's still too high, or too general.

Comment author: jeremysalwen 12 January 2013 07:39:44AM 2 points [-]

It's also irrelevant to the point I was making. You can point to different studies giving different percentages, but however you slice it a significant portion of the men she interacts with would have sex with her if she offered. So maybe 75% is only true for a certain demographic, but replace it with 10% for another demographic and it doesn't make a difference.

Comment author: MugaSofer 13 January 2013 10:32:35AM 2 points [-]

Oh, it certainly doesn't affect your point. I agree with your point completely. I was just nitpicking the numbers.

Comment author: [deleted] 09 January 2013 11:57:55PM 2 points [-]

This post and the linked story scared the heck out of me. Thanks for the thought-provoking material.

Comment author: MugaSofer 11 January 2013 10:07:20AM *  0 points [-]

I suspect the vast majority of Overcoming Bias readers could not achieve the "happiness of stupidity" if they tried. That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see.

The happiness of stupidity is closed to you. You will never have it short of actual brain damage, and maybe not even then. You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not. That way is closed to you, if it was ever open.

All that is left to you now, is to aspire to such happiness as a rationalist can achieve.

So, to be clear, you don't think that such neurohacking as presented in the story is possible?

That said, I think you've found a pretty convincing argument that we shouldn't accept the tradeoff, even if it's available. That is one scary piece of writing.

Comment author: Algernoq 19 August 2014 02:02:30AM 0 points [-]

Relevant: Paul Graham, Why Nerds are Unpopular

Paul Graham argues that a nerd is anyone not primarily focused on popularity, and that nerds lose the competitive and zero-sum game of popularity to those who aren't distracted by things like studying. After nerds enter the real world, however, they can form their own special-interest communities and often do very well.

Regarding Aaronson's piece, ditziness as signaling makes sense. However, the protagonist failed to see other options: she could have "won" by making the first moves to date an attractive but passive/malleable and socially clueless boy. She could have really "won" by stringing along several passive/malleable/clueless boys. Instead, she sold her soul to stay with the next random guy who asked her out after her "realization", because being alone was more painful. She didn't realize that her understanding of evolutionary theory and rationality failed to make up for her lack of domain knowledge about dating/relationships.

Comment author: TGGP4 14 September 2007 08:46:07PM 4 points [-]

"Believing you're happy" and "being in fact happy" strike me as a distinction without a difference. How are they falsifiable?

Comment author: Acidmind 20 August 2012 10:45:55AM -1 points [-]

By comparing a written self-evaluation with serotonin and dopamine levels in one's brain, perhaps?

Comment author: hannahelisabeth 11 November 2012 10:19:08PM 3 points [-]

How would you calibrate a brain scan machine to happiness except by comparing it to self-evaluated happiness? You only know that certain neural pathways correspond to happiness because people report being happy while these pathways are activated. If someone had different brain circuitry (like, say, someone born with only half a brain), you wouldn't be able to use this metric except by first seeing how their brain pattern corresponded to their self-reported happiness. It seems to me that happiness simply is the perception of happiness. There is no difference between "believing you're happy" and "being happy." You can't be secretly happy or unhappy and not know it, 'cause that wouldn't constitute happiness.

Comment author: Peterdjones 11 November 2012 10:43:48PM 0 points [-]

There's no self-deception, then?

Comment author: hannahelisabeth 12 November 2012 08:40:18AM 0 points [-]

Only retroactively. Our memories are easy to corrupt. But no, I don't think you can be happy or unhappy at any given moment and simultaneously believe the opposite is true. There's probably room for the whole "belief in belief" thing here, though. That is, you could want to believe you're happy when you're not, and could maybe even convince yourself that you had convinced yourself that you were happy, but I don't think you'd actually believe it.

Comment author: Peterdjones 12 November 2012 10:18:40AM 0 points [-]

You haven't given any evidence for those claims. At one time it was believed that minds were indestructible, atomic entities, but now that we know we have billions of neurons, there is plenty of scope for one neuronal cohort to believe or feel things that another does not.

Comment author: hannahelisabeth 13 November 2012 03:48:31PM 0 points [-]

Sure, that's true. I suppose you could have a split-brain person who is happy in one hemisphere and not in the other, or some such type of situation. I guess it just depends on what you're looking for when you ask "is someone happy?" If you want a subjective feeling, then self-report data will be reliable. If you're looking for specific physiological states or such, then self-report data may not be necessary, and may even contradict your findings. But it seems suspect to me that you would call it happiness if it did not correspond to a subjective feeling of happiness.

Comment author: Kindly 12 November 2012 01:54:59AM 0 points [-]

It's hard to be mistaken about how happy you are at the precise moment you're asked the question (you might have trouble reporting exactly how happy you are, but that's different). However, if you want to know how happy you've been over the past month, for example, it's possible to be wrong about that; you could be selectively remembering times you were more or less happy than average.

Comment author: hannahelisabeth 12 November 2012 08:37:43AM 0 points [-]

True. Still, the method of measuring serotonin and dopamine levels would offer no benefit over a self-evaluation, since you can't implement it retroactively.

Comment author: Tom_Breton 14 September 2007 08:51:22PM 10 points [-]

What if self-deception helps us be happy? What if just running out and overcoming bias will make us - gasp! - unhappy?

You are aware, I'm sure, of studies that connect depression with freedom from bias, notably overconfidence in one's ability to control outcomes.

You've already given one answer: to deliberately choose to believe what our best judgement tells us isn't so would be lunacy. Many people are psychologically able to fool themselves subtly, but fewer are able to deliberately, knowingly fool themselves.

Another answer is that even though depression leads to freedom from some biases and illusions, the converse doesn't seem to apply. Overcoming bias doesn't seem to lead to depression. I don't get the impression that a disproportionate number of people on this list are depressed. In my own experience, losing illusions doesn't make me feel depressed. Even if the illusion promised something desirable, I think what I have usually felt was more like intellectual relief, "So that's why (whatever was promised) never seemed to work."

Comment author: FrancesH 04 December 2010 08:55:01PM 7 points [-]

Agreed. I always feel profoundly relieved and even moderately triumphant.

Comment author: Acidmind 20 August 2012 10:51:19AM 2 points [-]

I can even experience a slight stroke of euphoric lunacy upon the shattering of my delusions. Somehow the world seems to burn brighter without the blurry lenses that biases provide.

Comment author: hannahelisabeth 11 November 2012 10:30:09PM 0 points [-]

I'd heard of the connection between depression and more accurate perceptions (notably, more accurate predictions due to less overconfidence), but I wasn't aware of the causal direction. It had been portrayed to me as being that the improved perception of reality was the cause of the depression. Or maybe I just mistakenly inferred it and didn't notice. I didn't know it actually went the other way, though now that I think about it, that actually makes a lot of sense.

Personally, I find that improved map-territory correspondence leads to more happiness, at least the improved rationality which results from learning Rational Emotive Behavior Therapy. It's not just losing illusions that helps. It's better understanding yourself, better understanding what is actually causing your emotions, and realizing that you have a more internal locus of control over your emotions than you thought. It's liberating to be able to stop an emotional reaction in its tracks, analyze it, recognize it as following from an irrational belief, and consequently substitute a rational emotion for the irrational one. It helps especially with anger and anxiety, as those have a tendency to result from irrational, dogmatic beliefs.

Comment author: pdf23ds 14 September 2007 09:04:51PM 2 points [-]

Depression is specifically linked to reducing overconfidence. People more accurately assess their own abilities (and perhaps others' abilities as well). I'm not aware that it's linked to decreasing other biases.

Comment author: g 14 September 2007 09:20:41PM 4 points [-]

"How happy is the moron: / He doesn't give a damn. / I wish I were a moron. / -- My God, perhaps I am!"

Or, in other words, wanting to be stupid is itself a form of stupidity.

Comment author: James_Bach 14 September 2007 09:29:48PM 3 points [-]

I'm pleased to say that, through a great deal of study and practice, I *have* learned how to unlearn things that I know. This is called skepticism. A key to it is the ability to imagine plausible alternatives to whatever is believed. Descartes is famous for developing this idea, although he was constrained by his society from completely embracing it. Pyrrho and Sextus Empiricus developed this idea, but their community was persecuted and destroyed by the Christians, too.

Skepticism is not opposed to rationality, but neither does it accept that a rationally derived solution to a problem is *necessarily* the best solution (unless you define rationality as whatever leads to the best solution, in which case you have to abandon the notion of a rational methodology).

My wife is an ongoing experiment and example for me, because she seems to live her life almost entirely without rationality and critical thinking as I recognize it. She lives instead by pattern matching and by the process of comparing real and anticipated feelings. You feel superior to her. Well, she feels superior to you. Is there a non-biased process that can decide who is right? Sure there is: mutation and natural selection. My wife is the product of billions of years of evolution, as are you. So, it seems to be a tie...

I like being "smart" and "analytical". It's my kind of game. I find symbolic logic fascinating. I write software using my logical mind. I enjoyed reading your wonderful tutorial on Bayesian reasoning, though I already knew the material, having read the Cartoon Guide to Statistics and the works of Tversky and Kahneman, years ago. But not since 1920 or so has it been possible to make a fully rational case for living a fully rational life. To do that you have to base your reasoning on premises, and that leads to the infinite regress problem. You have to map your premises to reality, but you don't have direct access to reality.

I'm not attacking rationality. I love it. But why be biased in favor of it? Why not just do what works for you and leave it at that?

Comment author: adamisom 18 July 2011 02:56:57PM -1 points [-]

Because being rational isn't just something fun to play with. It's about making your beliefs and actions correspond to reality, which will eventually catch up with you. Nothing you've said here indicates that you have actually read this blog.

Comment author: Desrtopa 18 July 2011 03:08:38PM 3 points [-]

To be fair, this comment was made before most of the blog had been written.

Comment author: Tiiba2 14 September 2007 10:19:09PM 0 points [-]

This thing about depressed people being unbiased makes no sense to me. Maybe they're not overconfident, but aren't they underconfident instead? I'd find it pretty surprising if a mental illness was correlated with common sense.

Anyway, perhaps the key to being rational and happy is suppressing not facts, but fear of them. No, you can't have a pony. Get over it.

Comment author: hannahelisabeth 11 November 2012 10:37:32PM 1 point [-]

I think it's not underconfidence, because our overconfidence is so high that it really is hard to be pessimistic enough to match reality. Depressed people seem to have just enough pessimism to compensate (but not overcompensate) for this bias. I don't think that necessarily gives them more common sense. Even in terms of being more realistic, this is only one bias that they compensate for. It's not like depression magically cures any of the other biases.

Depressed people also have a tendency to have an external locus of control, and that is not necessarily rational. You may not be able to control the situations you're in, but it's often the case that your actions do have a significant impact on them, so believing that you have very little or no control is often not rational.

Comment author: g 14 September 2007 10:38:07PM 4 points [-]

Tiiba: "makes no sense" and "would be surprising" are very different things, and the former is excessive for the claim about depressed people. The level of confidence that's optimal for making correct predictions about the world could be much lower than the level that's optimal for living a happy life. Do you have some way of knowing that it isn't?

(Let me forestall one argument against by remarking that evolution is not in the business of maximizing our happiness.)

Comment author: paul3 14 September 2007 11:33:26PM -3 points [-]

This post strikes me as being pretty arrogant. Actually the whole blog tends in that direction, but this post especially, where the author finally makes clear the dichotomy between the readers of this blog (the uber-rationalists) and everyone else (the stupid).

When your world view causes you to believe that everyone who is not single-mindedly pursuing your worldview is stupid, I think you should treat that as a warning sign of bias. Even if your world view is about overcoming bias.

Comment author: Eliezer_Yudkowsky 14 September 2007 11:39:31PM 8 points [-]

Um, there are readers of this blog, and there are people who enjoy the "happiness of stupidity" (which is not the same as just having a low IQ; it involves other personality traits as well). I don't think there's much overlap between those two groups. But they are far from being the only two groups in the world, and there is no dichotomy between them.

Comment author: Lethalmud 04 July 2013 10:34:57AM 2 points [-]

This is interesting. When I first discovered LW, I was reading In Praise of Folly by Erasmus. He argues, among other things, that all the emotions and feelings that make life worthwhile are inherently embedded in stupidity: love, friendship, optimism, and happiness require foolishness to work. Now, it is very hard to compare a sixteenth-century satirical piece with a current rational argument, but I have observed that intelligence and stupidity don't seem to be mutually exclusive. Where does your assumption come from that intelligent, rational people can't be stupid? Emotions don't tend to be rational, and under the force of a strong one like love, even the most intelligent and rational person can turn into an optimistic fool, sure that their loved one is infinitely more trustworthy than the average human and that statistics on adultery don't apply in this case. Should you try to overcome the bias of strong emotions? Can you overcome it at all? I have never seen someone immune to it. So maybe the happiness of stupidity is still available to all of us.

Comment author: Hopefully_Anonymous 14 September 2007 11:41:55PM 1 point [-]

My understanding is that happiness is a product of biochemistry and neuroanatomy, and doesn't have to inherently correlate with any knowledge, experience, or heuristic.

Comment author: Davidmanheim 20 January 2011 03:46:45AM 0 points [-]

First, it makes no sense to claim that there is no connection between experiences and biochemistry; clearly some experiences cause certain biochemical reactions. Eating and sleeping have clear neurochemical results. The connection doesn't need to "inherently correlate"; it does, in fact, correlate. The logic behind it may be obscure, but that does not mean correlations cannot be used to test and establish causality. That's why we attempt to better approximate rationality: to find how reality does and doesn't work.

To answer the substantive point, knowledge, experience, and heuristics are emergent properties of biochemistry and neuroanatomy; of course there is a relationship between the substrate and the emergent properties. The precise nature of the interaction in a complex system can be deduced either through correlation of different systems and the behavior of the substrate, in the way gross neuroimaging locates the proximate location of whatever stimulus is being studied, or better, a full understanding of how the system works, which we do not have.

With a full understanding, we could discuss the necessary or inherent correlation, but until then, we can reasonably discuss only the actual behavior of the system. So the question of whether certain knowledge or experiences cause happiness is a reasonable one.

Comment author: bigjeff5 28 February 2011 02:54:49AM 1 point [-]

My understanding is that happiness is a product of biochemistry and neuroanatomy, and doesn't have to inherently correlate with any knowledge, experience, or heuristic.

Hopefully Anonymous never claimed there was no connection between experiences and biochemistry, only that the two weren't inherently linked.

If they were inherently linked, then you could not have happiness without certain experiences, and those same experiences would always increase your happiness. Personal experience and the fact that clinical depression exists tell me this cannot be true. The fact that a chemical imbalance alone can eliminate your happiness completely, regardless of what your actual experience may be, is proof that happiness is primarily a function of biochemistry.

The fact that certain experiences make us more happy shows that experiences can influence our biochemistry, but the two are most certainly not inherently linked.

Comment author: Ryan_Holiday 14 September 2007 11:43:18PM 0 points [-]

Does having an optimistic explanatory style (i.e., delusional optimism) lead to reduced rates of depression and increased happiness?

http://www.psych.nyu.edu/oettingen/OETTINGEN1995EXPLANATORY.PDF

Comment author: Michael4 15 September 2007 12:58:30AM 5 points [-]

"Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen."

Have you talked to any religious people lately? "Oh, the tornado ripped my neighbors house off the foundations, but we were spared. I guess God was looking out for us!"

Could anyone say that without willfully blinding themselves? Do they really think they are better people than their neighbors, and that God moved the tornado away from their house? Yet you hear stuff like this all the time. And I think they really believe it.

The ability to delude ourselves seems to be one of our main survival traits. Rational people would never take the stupid chances that result in progress. Evolution has favored a species that buys lottery tickets.

Comment author: Robin_Hanson2 15 September 2007 02:18:50AM 5 points [-]

Surely, true wisdom would be second-order rationality, choosing when to be rational. ... You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception. The other alternative is to choose blindly to remain biased, without any clear idea of the consequences. This is ... willful stupidity.

This isn't quite fair. While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know. And it should be possible for your best judgment in this situation to be that you are better off biased. Of course, this mere possibility does not mean that you are in fact better off being biased.

Comment author: Eliezer_Yudkowsky 15 September 2007 03:53:25AM 4 points [-]

While it is true that you couldn't know the detailed consequences of being biased, you could make a rational judgment under uncertainty, given what you do know.

Yes, but for it to be a rational judgment under uncertainty, you would have to take into account the unknown unknowns, some of which may be Black Swans (where rare events account for a significant fraction of the total weight), plus such well-known biases as overconfidence and optimism. Think of all that worrying you'll have to do... maybe you should just relax...

My own life experience suggests that any black box should be assumed to contain a Black Swan. (Or to be precise, a substantial probability of such, rather than probability 1.0.)

Comment author: Rob_Spear 15 September 2007 05:52:59AM 0 points [-]

State legitimacy is similarly based on such self-deception, whether it uses the traditional "'cos God says so" approach or the more modern "'cos we won a popularity contest" idea. In neither case is there any real reason why people in general should act as if the state has the right to make laws and manage people, and yet it does, apparently to the general good, unless you happen to be a radical libertarian.

Surely this is the same as the happiness case: by having most people in a nation sharing the delusional belief in the legitimacy of the state, the nation as a whole benefits.

Comment author: Robin_Hanson2 15 September 2007 08:25:36AM 4 points [-]

Eliezer, we are in essence talking about a value of info calculation. Yes, such a calculated info value rises with rare important things you might know if you had the info. But even so it is not guaranteed that info will be worth the cost. Similarly, it is not guaranteed that our choosing to avoid bias will be worth the costs.

It seems to me simpler to just say that given our purposes we judge better overcoming our biases to in fact be cost-effective on the topics we emphasize here. The strongest argument for that seems to me that we emphasize topics where our evolved judgments about when we can safely be biased are the least likely to be reliable guides to social, as opposed to personal, value.

Comment author: J_Thomas 15 September 2007 01:04:04PM 10 points [-]

...you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy." But we do not have such direct control over our beliefs. You cannot make yourself believe the sky is green by an act of will.

In my experience, this is not true.

My father was a dentist, and when I was 7 he learned hypnosis to use to anesthetise his patients. Of course he practiced on me while he was learning. (As it turned out, he did successful anesthesia with it for a few years before people started spreading stories that hypnosis was dangerous mind-control and he quit.)

With posthypnotic suggestion people can easily believe things that they have no reason to believe, remember things they did not experience, and ignore their senses up to a point. I've done it. It all feels real.

I learned to hypnotise people a little, and I learned how to do it on myself. It certainly can be done. You do have that control over your beliefs, if you're willing to use it.

Which is not to say it's a good idea. IME the main time it's useful to make yourself believe something is when you have nothing to lose by burning your bridges, when you lose everything anyway if the belief is wrong. Then you might as well believe it wholeheartedly.

I've read that interest in hypnosis runs in something like an eleven-year cycle. People start to think there's something interesting there. They start studying it, and get some fascinating results that look powerful in some ways. Then as they keep studying, they find that all the unexpected things people can do under hypnosis they can also do without hypnosis. And then they start to see that a lot of people are basically walking around hypnotised a lot of the time. They start to wonder what exactly they're studying, and they quit, and after the subject lies fallow awhile, more people get interested and it starts again.

Basically all it takes for hypnosis is that the person relax and listen uncritically. If they're willing to believe what they're told, they're hypnotised. All the peculiar abilities people sometimes display when told to under hypnosis are things they could do but normally don't believe they can do. When they give up their scepticism, they go ahead and do their best instead of doubting themselves and hesitating. They're willing to believe delusions for somebody they trust, and when the limits of the trust show up or they get emphatic evidence against the delusion, then they rethink.

You really can deceive yourself. You can build false memories and believe them. You can make the sky look a little green, particularly on a cloudy day, and you can build on that until it looks pretty green -- provided the idea of a green sky doesn't offend you too much. If you believe it's impossible you can't see it. If it's "I didn't know that was even possible, I wonder why it's happening now?" then you can.

These are things that anybody can learn to do. But I mostly agree with your arguments that it is not generally a useful skill. If I get a toothache I don't anesthetise it until after I get my dentist appointment, and if I miss the appointment the pain comes back. Pain is your signal that something is going wrong with your body, and in general it's a bad idea to ignore that.

Comment author: David_Gerard 19 January 2011 09:01:12AM *  8 points [-]

False memories are horrifyingly easy to induce. Here is a Scientific American story on the subject from 1997, and here is a scary story from an ex-Scientologist about how to induce false memories using Scientology auditing. "Up to this day, I intellectually know that this story was a fiction written by a friend of mine, but still I have it in vivid memory, as if I was the very person that had experienced it. I actually can't differentiate this memory from any other of my real memories, it still is as valid in my mind as any other memory I have."

Human memories are untrustworthy. This leads to a philosophical dilemma about whether or not to trust your memory, and how much, and what you're supposed to use if you can't trust your memory.

Comment author: hannahelisabeth 11 November 2012 11:18:59PM 1 point [-]

Not everyone can be hypnotized. About a quarter of people can't be hypnotized, according to research at Stanford.

I've tried to be hypnotized before and it didn't work. I think I'm just not capable of making myself that open to suggestion, even though I would have liked to have been hypnotized.

I heard from one of my psychology professors that those on the extreme ends of the IQ spectrum (both high and low) have more trouble being hypnotized, but I'm not sure if this is actually true. The Stanford research showed that hypnotizability wasn't correlated with any personality traits, but I probably wouldn't consider IQ a personality trait.

Comment author: Hopefully_Anonymous 15 September 2007 01:33:16PM 0 points [-]

"Evolution has favored a species that buys lottery tickets."

It's (statistically) bad for the individual but good for the species. Although even buying lottery tickets (or its natural equivalents) is probably deoptimized behavior. I imagine there's some Bayesian-optimal approach for a species and the spectrum of risk-taking its members would engage in. In contrast, I suspect our species performs functionally rather than optimally.

Comment author: Tiiba2 15 September 2007 03:58:11PM 4 points [-]

Forgive me, Master Eliezer, for I have sinned.

I have come to realize that inside my mind is not merely self-delusion, but a full-blown case of doublethink. There are two mutually exclusive statements that I simultaneously hold to be unquestionably true. Here they are:

1) I should not cause suffering to others.
2) Only my own happiness really matters.

I can even explain this doublethink. I am naturally selfish, but society makes me be good. I could try to believe that only I matter, and do good things only for the show, but that strategy doesn't work for most people. Being good is too complex.

This doublethink creates interesting effects. When I read about context insensitivity, I wondered if that's really a bias, or just apathy masquerading as concern. I'd probably give the same amount to save five birds as I would to save Atlantis from sinking. Both are social acts.

I also wonder about coherent extrapolated volition. What will it find when it extrapolates us? That we all want the whole pie? That we would gladly exterminate everyone else if we could get away with it?

Comment author: Eliezer_Yudkowsky 15 September 2007 05:04:06PM 1 point [-]

"Evolution has favored a species that buys lottery tickets."

It's (statistically) bad for the individual but good for the species.

This is a group selection argument. (If you don't know what that means, it's something that biologists use to scare their children.) Evolution does not operate on species. It operates on individuals. Genes that are statistically bad for individuals drop out of the gene pool no matter what they do for the species.

This is an ancient and thoroughly discredited idea. See George Williams's "Adaptation and Natural Selection."

Comment author: kaimialana 26 July 2010 10:17:44PM 5 points [-]

Actually, there can be multi-level selection (MLS theory; cf. http://en.wikipedia.org/wiki/Group_selection#Multilevel_selection_theory) when there is competition between groups. In the same sense there is selection between individuals when there is competition between individuals, or the competition between genes popularized by Richard Dawkins.

http://www.americanscientist.org/my_amsci/restricted.aspx?act=pdf&id=16386020847008 is a good primer.

This is the best solution for Darwin's problem of ant colonies, even better than haplodiploidy. I thought I would come out of lurking while reading through the sequences to mention this, since multi-level selection was demonized during the 70s under the name "group selection" due to some overzealous proponents. So, while we would not say "evolution has favored a species that buys lottery tickets", we might hypothesize evolution favors human societies that buy lottery tickets when under competition with other societies that do not (as an example).

Comment author: Hopefully_Anonymous 15 September 2007 05:36:43PM 1 point [-]

Eliezer, I mentioned behaviors/biases that are statistically bad for the individual, not genes. Also, I'm interested in your take on the idea that the existence of humans with a range of different biases can be good for other humans, even if it's not optimal from the perspective of the person with the bias.

Comment author: Tiiba2 15 September 2007 05:38:19PM 0 points [-]

"When I read about context insensitivity, I wondered if that's really a bias, or just apathy masquerading as concern. I'd probably give the same amount to save five birds as I would to save Atlantis from sinking. Both are social acts."

I want to clarify. I do believe in context insensitivity, but think indifference was also a factor in the donation case.

Comment author: J_Thomas 15 September 2007 06:51:06PM 0 points [-]

Genes that are bad for many of the individuals that carry them but that have large jackpots can be selected. As for how you tell whether the occasional large jackpot makes up for the common failure, it takes a long time to tell.

With lotteries you can judge by the house. They're in business to make money, and they have wealth that they got from previous lotteries, so it makes sense that the odds are against you in the long term. But that reasoning doesn't work in general.

Human beings who see jackpot events happen will sometimes gamble for long times without winning a jackpot. If they didn't, they couldn't win. They lose cumulatively while they wait. It takes a long time to find out by trial and error whether they win on average or not, and if they don't try long enough, they don't find out what the odds really are.
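This point about how slowly jackpot odds reveal themselves can be sketched numerically. A minimal Monte Carlo toy; the jackpot probability, prize, and ticket cost below are made-up illustrative numbers, not real lottery parameters:

```python
import random

def play(n_tickets, p_jackpot=1e-5, prize=50_000.0, cost=1.0, seed=0):
    """Net result of buying n_tickets one-dollar tickets in a lottery
    whose true expected value is -0.5 per ticket (a 50% average loss)."""
    rng = random.Random(seed)
    net = 0.0
    for _ in range(n_tickets):
        net -= cost
        if rng.random() < p_jackpot:
            net += prize
    return net

# A gambler who never happens to hit the jackpot experiences a 100% loss
# rate, so short runs of trial and error badly misestimate the true odds.
for n in (1_000, 10_000, 100_000):
    print(n, play(n))
```

With a jackpot probability of 1 in 100,000, most careers of a few thousand tickets contain no wins at all: the cumulative losses arrive immediately, while the information about the true average arrives only over very long runs.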

Comment author: TGGP4 15 September 2007 07:34:57PM 0 points [-]

Eliezer, do you concede that there is no difference between "believing you're happy" and "really being happy"?

HA, I was surprised you stumbled into that one. A good introductory example of how evolution optimizes not at the species level but at the gene level can be found here. It is by Richard Dawkins, who is also known for the term "meme", an idea that can be analyzed like a gene. Unless the meme that buying lottery tickets is a good idea is beneficial for those that hold it, we should not expect it to become prevalent even if it benefits the species. You can find other good posts from Razib on "group selection" if you look for them.

Comment author: Eliezer_Yudkowsky 15 September 2007 07:42:13PM 8 points [-]

Eliezer, do you concede that there is no difference between "believing you're happy" and "really being happy"?

No. There is a difference between believing you love your stepchildren and loving your stepchildren, between believing you're deeply upset about rainforests and being deeply upset about rainforests, and between believing you're happy and being happy.

As soon as you turn happiness into an obligatory sign of spiritual health, a sign of virtue, people will naturally tend to overestimate their happiness.

Falsifiable difference? Put 'em in an fMRI or use other physiological indicators.

Comment author: Peacewise 29 October 2011 10:55:42AM 1 point [-]

Perhaps the TED lecture by Dan Gilbert might cast some illumination upon whether there is a difference between believing you're happy and really being happy.

http://www.ted.com/talks/lang/eng/dan_gilbert_asks_why_are_we_happy.html

It sounds to me like what's being discussed is: is synthetic happiness the same as happiness? Dan Gilbert argues that they are the same.

Comment author: Peacewise 30 October 2011 12:14:15AM -3 points [-]

What's the -1 for please?

Comment author: Alicorn 30 October 2011 12:52:16AM *  4 points [-]

Please don't ask this for every comment of yours that is downvoted, at least until you can reliably make comments that aren't downvoted. It clutters the recent comment threads.

(ETA: I posted this in response to two of the same query being issued in a row. I don't object to people asking why they were downvoted when it's occasional.)

Comment author: Peacewise 30 October 2011 01:31:49AM 2 points [-]

Perhaps you should read the

http://lesswrong.com/lw/2ku/welcome_to_less_wrong_2010/

page, where it is stated

"However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.)"

I'm doing what is suggested as the etiquette.

Comment author: Manfred 30 October 2011 03:27:39AM *  4 points [-]

Goodness, I for one would dislike it if people started doing that all the time (sometimes, it says, which is an apparently informative way of saying "Between 0 and 100% of the time").

The downside of doing it often is that it makes people feel like you're asking for an explanation without putting in any noticeable effort to understand. Writing things that are nice to read generally does take effort. I would recommend only asking if you are genuinely confused after a good sixty seconds of uninterrupted thought on how other people could have perceived your post. And, of course, lurking moar is good advice.

Comment author: Peacewise 30 October 2011 04:01:41AM -1 points [-]

Fair enough Manfred, I respect your feeling of dislike on this position, but I disagree with its lack of rationality.

I did put in more than 60 seconds of effort trying to understand why it's a -1, and couldn't come up with an explanation that didn't include my own bias. So I wanted both to understand what the -1 was for and to test whether my inclination is true. So far my bias is telling me it's an example of "have a go at the new guy on the block". I hold this only very lightly and will enjoy being proven incorrect by having the -1 explained.

It's commonly accepted that the most challenging time for a new group member is their beginning with the group and it's also known that constructive feedback helps with that challenge.

Does a member of rational group want to provide rational feedback? Observationally quite a few do not.

If I never (or rarely) question the -1, or never or rarely receive any more feedback than the -1, then I will struggle, or may never understand what the -1 is for. I consider myself to be intellectually honest in asking "what's wrong with this?", because during the process of writing the post I'm already asking "what's wrong with this?" So if someone with more knowledge gives me a -1, I'd appreciate being informed what's wrong with it, for in being informed I can implement better self-editing procedures; that is, I can improve my rationality.

Now if they don't have the time to answer that question, OK, I'll consider on my own what the -1 is for (again!) and then it's more likely I'll come up with an answer that has some amount of "reasoning" based upon my own biases. The sequences I've read so far imply that biases are something people should attempt to perceive and challenge, so I believe I am being consistent with the site's inclination towards rationality by asking the question, both to challenge and improve my own understanding and to do the same for the person who gave me the -1, and indeed for those who witness the exchange.

Comment author: lessdazed 30 October 2011 04:29:04AM 3 points [-]

couldn't come up with something that didn't include my own bias.

What does this mean? That you couldn't come up with something that didn't include the other person's being stupid or innately evil?

Does a member of rational group want to provide rational feedback?

Oh, my word!

the -1

That does not represent a systematic negative reaction to your post or even consensus disagreement.

Comment author: lessdazed 30 October 2011 05:16:59AM *  0 points [-]

I think that asking why a comment was downvoted would be legitimate even more than two times in a row, were the downvoted comments downvoted more than once.

For comments that were only downvoted once, it is not usually a question worth asking. So I agree with the literal reading of the original "don't ask this for every comment of yours that is downvoted" more than the clarification.

Comment author: wedrifid 30 October 2011 03:43:41AM 1 point [-]

I really can't figure that out myself. The comment doesn't seem to be annoying, irrelevant, rude or stupid. (Dan Gilbert's argument is wrong all the same.)

Comment author: lessdazed 30 October 2011 04:15:49AM *  1 point [-]

It's probably because Gilbert conflates happiness and utility.

Comment author: Unnamed 30 October 2011 05:39:32AM 6 points [-]

I didn't downvote, but since the post is mostly just a link to a video my guess is that it's somebody signaling that the video isn't worth watching.

When the main content of a comment is a link, votes get used to indicate whether the link is worth following. This is especially relevant when the link is to a video, which involves a large time commitment as it is not skimmable. If the content of the video doesn't justify the time commitment, then downvotes tell other readers not to waste their time on the link (and warn the poster not to waste people's time with such links).

Comment author: lessdazed 30 October 2011 06:10:22AM *  0 points [-]

In my opinion, it's worth watching as a best presentation of a wrong idea that doesn't attempt to engage the correct one. It's also worth watching because it compiles interesting true facts and merely draws wrong conclusions from them, though correct conclusions would also be interesting and some of his intermediate conclusions are fine.

Comment author: J_Thomas 15 September 2007 08:01:21PM 0 points [-]

Unless the meme that {buying lottery tickets is a good idea} is beneficial for those that hold it, we should not expect it to become prevalent even if it benefits the species.

But it is prevalent. And on average people lose money at it, while the occasional winners tend not to do well.

So it's natural to suppose that the meme for buying lottery tickets is a perversion of some other functional meme.

Here's a way that lotteries could be functional after all for people in extended families. If you sacrifice and save and start to build up a little capital, you may be accosted by distant relatives in need who have the right to your assistance, and it all drains away. But when you win the lottery you can go live in some distant place and only share enough to stay in distant good standing. When building up savings is considered immoral, paying 40% on average might not seem so bad for a chance to get some capital anyway.

Comment author: James_Bach 16 September 2007 01:36:47AM -1 points [-]

"Evolution does not operate on species. It operates on individuals. Genes that are statistically bad for individuals drop out of the gene pool no matter what they do for the species."

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that does not have it, while 1/10 of the humans who have it can't reproduce at all. This would be a gene that would serve the species (i.e., the portion of the species that had it), even though it would harm some individuals. Notice that the inability of the 10% to procreate would not harm the prospects of such a gene for the species as a whole. Soon, the whole of the species would have this gene.

Isn't there some theorizing that suggests that homosexuality may be an example of something like this? Perhaps the phenomenon of homosexuality is linked to some wonderful benefit that increases the viability of heterosexuals. Otherwise, wouldn't homosexuals have been "selected out" long ago?

Comment author: razib 16 September 2007 05:44:40AM 2 points [-]

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that does not have it, while 1/10 of the humans who have it can't reproduce at all.

this means that the allele (genetic variant) increases fitness by a factor of 1.8. this is not a "species level" benefit in anything but a tautological way. higher levels of selection or dynamic processes are only interesting if they cannot be reduced to a lower level. e.g., you can increase the fitness of the group by simply increasing the fitness of the individuals which compose the group. this increases the fitness of the group, but it is easily reduced to increasing the fitness of individuals. in other cases you cannot decompose the group fitness to individuals, and then there are grounds for saying that the excess fitness gained by having a group, or evaluating a group, is something that is "for the good of the group."

to use a sports analogy, if you brought together an all-star team you'd get a better team, not because of the team dynamics but because the individual players are so much better. in contrast, there are teams which are very good because of group dynamics, where utility players can specialize in their roles and synergistically perform far better than they might as individuals.
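The 1.8x figure above (0.9 × 2 + 0.1 × 0) is an individual-level fitness advantage, and a variant with that advantage spreads on its own. A deliberately simplified sketch, treating carrier frequency with a one-line replicator update and ignoring diploid genetics and drift:

```python
def sweep(p, w_carrier=1.8, w_other=1.0, generations=50):
    """Deterministic spread of a variant whose carriers average 1.8x
    the fitness of non-carriers, as in the example above."""
    for _ in range(generations):
        mean_w = p * w_carrier + (1 - p) * w_other
        p = p * w_carrier / mean_w
    return p

# Starting from 1% of the population, the variant reaches near-fixation
# within ~50 generations, with no group-level selection involved.
print(sweep(0.01))
```

The carrier-to-non-carrier odds simply multiply by 1.8 each generation, which is why no appeal to species-level benefit is needed to explain the sweep.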

Comment author: razib 16 September 2007 05:49:56AM 0 points [-]

This is an ancient and thoroughly discredited idea. See George Williams's "Adaptation and Natural Selection."

i am generally skeptical of group selectionist arguments, but we are probably on the cusp of a renaissance in this area. it will be spearheaded by e.o. wilson, who has always been a "believer," but who now believes that group selection (or at least multi-level selection) has the empirical and analytical firepower to make a comeback. i am cautiously skeptical, but in the interests of honesty i think that "ancient and thoroughly discredited" is probably a better description for group selection circa 1995 than 2007. most evolutionary biologists are probably pretty skeptical of group selectionist arguments, but in large part it is because the models presented (which tend to avoid the pitfalls of the earlier arguments) are hard to test and seem analytically intractable beyond the simplest formulations.

Comment author: razib2 16 September 2007 06:06:51AM 1 point [-]

Imagine a gene that caused 9/10 of the humans who have it to be twice as fertile and attractive as the population that does not have it, while 1/10 of the humans who have it can't reproduce at all.

btw, you don't have to imagine. sickle cell is like this. a proportion of the population gets increased benefit from having the gene, and a proportion gets decreased benefit, in the ratio of heterozygotes (those who carry one sickle cell allele and one normal one) to homozygotes (those who carry two sickle alleles), i.e., 2pq:q^2. that's not species selection, it's standard balancing selection upon one gene.
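The sickle-cell case can be sketched with the standard one-locus balancing-selection recursion; the fitness values below are illustrative, not empirical sickle-cell estimates:

```python
def next_freq(q, w_aa=0.85, w_as=1.0, w_ss=0.0):
    """One generation of selection at a single diploid locus. q is the
    frequency of the S allele; heterozygotes (AS) are fittest, which
    holds the allele at an interior equilibrium instead of eliminating it."""
    p = 1.0 - q
    w_bar = p * p * w_aa + 2 * p * q * w_as + q * q * w_ss
    # S-bearing gametes after selection: half the AS genotypes (frequency
    # 2pq, so pq gametes carry S) plus all gametes from SS genotypes
    return (p * q * w_as + q * q * w_ss) / w_bar

q = 0.01
for _ in range(500):
    q = next_freq(q)

# converges to the analytic equilibrium q* = s / (s + t),
# with s = 1 - w_aa and t = 1 - w_ss
print(round(q, 3))
```

With these numbers selection settles at q* = 0.15 / 1.15, about 13%, even though SS homozygotes have zero fitness: gene-level bookkeeping, not species-level benefit, maintains the allele.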

Comment author: Doug_S. 16 September 2007 06:12:25AM 0 points [-]

What's wrong with group selection? All you need is for the benefit to the individual of being in a group in which trait X is sufficiently common to be sufficiently bigger than the benefit of not having trait X in the individual... or am I confused?

Comment author: Michael_Rooney 16 September 2007 07:25:15AM 1 point [-]

You know, self-deception has attracted some inquiry already.

Comment author: g 16 September 2007 09:41:50AM 0 points [-]

Doug, what's wrong with group selection is mostly that selection at the individual level works so much faster. If something's harmful to individuals, it's likely to have been wiped out by individual-level selection before it gets the chance to help the group.

It's possible to concoct scenarios where group-level effects win. For instance: some allele has no effect at all when heterozygous, but when homozygous it causes its bearer to become astonishingly altruistic. By the time there's much incidence of homozygosity in any given community, the chances are that the allele is (heterozygously) quite common, and then it's possible that the individual's altruism does more net good than harm to bearers of the allele. This is kin selection rather than group selection really, but on a different scale from the usual.

Or: some allele has a *very slight* deleterious effect on individual fitness in general-- slight enough that it typically takes, say, 100 generations before natural selection becomes visible over genetic drift. If it then has some group-level effect that prevents rare but group-destroying incidents (say, once every 100 generations someone without it will go nuts and kill everyone around them) then it could be selected for on balance simply because groups where it doesn't happen to get fixed in the population tend to die. Note that making this work is rather dependent on group size.

But it's pretty hard to concoct such scenarios that are actually *plausible*, and pretty hard to argue that anything in the real world looks much like them.

Comment author: J_Thomas 16 September 2007 01:21:00PM 0 points [-]

Doug S, G has given a good explanation (except possibly the last sentence, which is debatable). I'll explain again: Selection happens when genes increase in frequency compared to other genes. Since genes always happen inside individuals, a gene that causes its individuals to leave fewer offspring in the population will be selected *against*, regardless of what it does for the population as a whole.

A gene that results in good stuff for the population but that doesn't result in its own carriers increasing *more* than others won't increase in the population even though all the individuals in the population would be better off.

You can get by this a little by assuming a population split into breeding groups with limited outbreeding, where a gene that improves the group enough can take over in a small group, and then when the group gets bigger it splits and both groups increase compared to other groups, etc. But too much outbreeding would stop it. Something like this may happen in rats and fruit flies, etc.; it's too soon to be sure.

There could be specific genetic mechanisms that provide a system to create group selection. Diploidy is a peculiar genetic mechanism, as are sexuality and dominance. There could be others that are less obvious, that benefit the populations that let them operate, and group selection is one of the things they might promote. But that's entirely speculative at this point.

Comment author: TGGP4 16 September 2007 09:19:59PM 0 points [-]

James Bach, if something has a frequency above 1% and has high fitness costs to those that hold it, it is probably pathogenic rather than genetic. You can find more on that from Greg Cochran at the bottom of this page.

Comment author: Raw_Power 12 October 2010 04:32:21PM 1 point [-]

You know, back in the old days, before I jumped on the Lesswrong train, I would say I willed myself into believing in God. Because whether he existed or not didn't change empirical conclusions: the world could have been created five minutes ago, intelligent design could have happened etc. etc.

But doing that made me anxious and uneasy, and there was something nagging at me. You know, those beliefs hurt, but it hurt even harder to get them out. I fought every inch for them. But when I lost, it was a relief; it felt like I had won.

Comment author: Carinthium 14 November 2010 03:44:27AM 2 points [-]

As far as I can tell, the weakness of the article is that it assumes one is deciding for oneself. One could decide to help others become irrational (on some issues) if you rationally decide it is best for them.

Comment author: ata 14 November 2010 05:24:25AM 2 points [-]

That's an even worse idea.

Comment author: Carinthium 14 November 2010 05:28:03AM 1 point [-]

It is if we accept the premise that it is best to be rational- I was pointing out that the article doesn't refute the argument it claims to.

Comment author: ata 14 November 2010 05:39:52AM 3 points [-]

I was referring more to the problems with other-optimizing. I'd estimate that it would be pretty dangerous to grant yourself permission to decide what delusions to instill in other people for their own good.

Comment author: Carinthium 14 November 2010 05:47:15AM 0 points [-]

Perhaps it's slightly overstating the case to claim that it is best to encourage delusions in others (I didn't know about the article, but having read it it appears accurate), but it is at least true that if (hypothetically) rationality turns out to generally be a net loss one could persuade other rationalists to prevent it spreading and decide not to spread it oneself.

(Or alternately try and prevent it spreading to those likely to lose from it)

Comment author: [deleted] 08 December 2010 03:21:07AM 5 points [-]

I'm through with truth.

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

A literate person cannot look at a sentence without reading it. But a small child, just learning to read, can look at letters on a page without reading, and has to make an extra effort to read them. In the same way, a bad rationalist can see that an idea is true, without believing it. I can read about electromagnetism and still not expect to see the beam in the cathode ray tube bend. I spent ten years or so thinking "Isn't it odd that the best arguments are on the atheist side?" without once wondering whether I should be an atheist.

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it but it doesn't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Comment author: wnoise 08 December 2010 03:30:56AM 2 points [-]

Why would you expect it to come at the cost of some kind of screaming Cthulhu horror?

Comment author: [deleted] 08 December 2010 03:34:48AM 2 points [-]

I'm not sure. It's just that if it did I wouldn't go for it.

I know one person who's really well calibrated with probability, due to a lot of practice with poker and finance. When something actually is an x% probability, he actually internalizes it -- he really expects it to happen x% of the time. He's 80% likely to be right about something if he says he has an 80% confidence.

He doesn't seem too bad off. Busy and stressed, yes, but not particularly sad. Cheerful, even.
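The calibration claim here can be checked mechanically: bucket a person's predictions by their stated confidence and compare each bucket's empirical hit rate to its label (a minimal sketch with made-up data, not the poker player's actual record):

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_confidence, outcome) pairs, where
    outcome is True if the claim turned out correct.  Groups by stated
    confidence and reports the empirical hit rate for each group."""
    buckets = defaultdict(list)
    for conf, correct in predictions:
        buckets[conf].append(correct)
    return {conf: sum(v) / len(v) for conf, v in sorted(buckets.items())}

# A well-calibrated forecaster's 0.8 bucket comes out near 0.8:
preds = ([(0.8, True)] * 4 + [(0.8, False)]
         + [(0.6, True)] * 3 + [(0.6, False)] * 2)
table = calibration_table(preds)
```

With real data the buckets would be noisy, so one would want many predictions per bucket before trusting the comparison.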

Comment author: JoshuaZ 08 December 2010 03:42:37AM 2 points [-]

I'm also missing the ability to estimate. Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right. I always get that sort of thing wrong. Arithmetic estimation is even harder. Deciding how to bet in a betting game? Next to impossible.

Whatever mechanism it is that matches theory to reality, mine doesn't work very well. Whatever mechanism derives expectations about the world from probability numbers, mine hardly works at all. This is why I actually can double-think. I can see an idea as logical without believing in it.

Congratulations. You're just like most humans.

Comment author: [deleted] 08 December 2010 04:06:47AM 1 point [-]

Well, then why does he say self-delusion is impossible? It's not only possible, it's usual.

Comment author: JoshuaZ 08 December 2010 04:21:08AM 0 points [-]

I wasn't talking about that aspect (although I think he's wrong there also) but just about the aspect of not doing a good job at things like estimating or mapping probabilities to reality.

Comment author: [deleted] 08 December 2010 04:29:40AM 0 points [-]

I think it's really the same thing. Mapping probabilities to reality is sort of the quantitative version of matching degree of belief to amount of evidence.

Comment author: JoshuaZ 08 December 2010 05:01:55AM 1 point [-]

Possibly taboo self-delusion? I'm not sure that's what he means. Self-delusion in this context seems to mean something closer to deliberately modifying your confidence in a way that isn't based on evidence.

Comment author: RobinZ 08 December 2010 10:07:35PM 2 points [-]

I am under the impression that many of Eliezer Yudkowsky's early sequence posts were written based on (a) theory and (b) experience with general-artificial-intelligence Internet posters. It's entirely possible that this is a correct deduction only for that weird WEIRD group.

Comment author: jimrandomh 08 December 2010 04:19:28AM 3 points [-]

I never had a scientific intuition. In college, I once saw a physics demonstration with a cathode ray tube -- moving a magnet bent the beam of light that showed the path of the electrons. I had never seen electrons before and it occurred to me that I had never really believed in the equations in my physics book; I knew they were the right answers to give on tests, but I wouldn't have expected to see them work.

Intuitively connecting mathy physics to reality isn't the default; you need to watch demonstrations and conduct thought experiments to make those connections. Your intuition got better that day.

Comment author: wedrifid 08 December 2010 04:21:33AM 0 points [-]

Draw a line on a sheet of paper; put a dot where 75% is. Then check if you got it right.

I tried that one and got it just about spot on. If you had asked me to estimate 67% now that may have been tricky. Estimating half twice in your head is kind of easy.

Comment author: RobinZ 08 December 2010 10:10:19PM 0 points [-]

If you had asked me to estimate 67% now that may have been tricky.

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)
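That rule is a fixed-point iteration: the little side x satisfies x = (1 - x) / 2, and each application of the rule halves the error, so it converges quickly to the 1/3 mark (mirror the dot for 67%). A quick sketch:

```python
def third_by_halving(x=0.5, steps=20):
    """RobinZ's rule for finding 1/3 of a line: repeatedly move the
    point to half of the big side, i.e. x <- (1 - x) / 2.  The error
    halves (and flips sign) each step, converging to 1/3."""
    for _ in range(steps):
        x = (1 - x) / 2
    return x
```

In practice a person would only need two or three applications of the rule before the remaining error is smaller than their dot.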

Comment author: wedrifid 08 December 2010 10:20:55PM 1 point [-]

Move your estimation point until half the big side is the same as the little side. (Although I've practiced enough to do halves, thirds, and fifths pretty well, so I might just be overgeneralizing my experience.)

Damn. I chose two random numbers and made a probability out of them. It seems I picked one of the easy ones too! :)

And yes, that algorithm does seem to work well for thirds. I lose a fair bit of accuracy but it isn't down to 'default human estimation mode' level.

Comment author: jimrandomh 08 December 2010 04:53:12AM 6 points [-]

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

You know what I really wish I had? Team spirit. Absolute group loyalty. Faith. Patriotism. The sense of being in the right. In Hoc Signo Vinces. I have fleeting glimpses of it but it doesn't last. I want it enough that I keep fantasizing about joining the Army because it might work. I always wanted to be a fanatic, and my brain would never do it. But I'm starting to wonder if that's hackable; I'm sure enough sleep deprivation and ritual would do it.

Absolute group loyalty is much more likely to lead you to a screaming Cthulhu horror than the pursuit of truth is. Especially if it comes from a combination of ritual and sleep deprivation.

Comment author: [deleted] 08 December 2010 05:21:16AM *  1 point [-]

Ok, worth thinking about.

I still want it. At times I really want victory, not just a normal life. Even though "normal" is all a person should really expect.

Comment author: shokwave 08 December 2010 05:53:42AM *  2 points [-]

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

Not to other-optimise, but yes.

As far as I can tell, the chances of encountering a true idea that is also a Lovecraftian cosmic horror are below the vanishing point for human brains. (There aren't neurons small enough to accurately reflect the tiny chances, etc.)

It will also help you make money. Example: I received a promotion for demonstrating my ability to make more efficient rosters. This ability came from googling "scheduling problem" and looking at some common solutions, recognising that GRASP-type (page 7) solutions were effective and probably human-brain-computable - and then when I tried rostering, I intuitively implemented a pseudo-GRASP method.

That "intuitively implemented" bit is really important. You might not realise how much you rely on your intuition to decide for you, but it's a lot. It sounds like taking a lot of theory and jamming it into your intuition is the hard part for you.

Tangentially, how do you feel about the wisdom of age and the value of experience in making decisions?

Comment author: [deleted] 08 December 2010 12:58:48PM 1 point [-]

I think wisdom and experience are pretty good things -- not sure how that relates though.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft. I just mean "if rationality results in extreme misery, I'll take a pass."

Comment author: shokwave 08 December 2010 03:10:45PM 0 points [-]

I think wisdom and experience are pretty good things -- not sure how that relates though.

Some people I have encountered struggle with my rationality because I often privilege general laws derived from decision theory and statistics over my own personal experience - like playing tit-for-tat when my gut is screaming defection rock, or participating in mutual fantasising about lottery wins but refusing to buy 'even one' lottery ticket. I have found that certain attitudes towards experience and age-wisdom can limit a person's ability to tag ideas as 'true in the real world' - so that reason and logic can achieve only 'true but not actually applicable in the real world'. It was a possibility I thought I should check.

And "screaming Cthulhu horror" was just a cute phrase -- I don't literally believe in Lovecraft.

I assumed it was a reference to concepts like Roko's idea. As for regular extreme misery, yes, there is a case for rationality being negative. You would probably need some irrational beliefs (that you refuse to rationally examine) that prevent you from taking paths where rationality produces misery. You could probably get a half-decent picture of what paths these might be from questioning LessWrong about it, but that only reduces the chance - still a consideration.

Comment author: TheOtherDave 08 December 2010 02:31:49PM 2 points [-]

You talk about belief the way popular culture talks about love: as some kind of external influence that overcomes your resistance.

And belief can be like that, sure. But belief can also be the result of doing the necessary work.

I realize that's an uncomfortable idea. But it's also an important one.

Relatedly, my own thoughts on the value of truth: when the environment is very forgiving and even suboptimal choices mostly work out to my benefit, the cost of being incorrect a lot is mostly opportunity cost. That is, things go OK, and even get better sometimes. (Not as much better as they would have gotten had I optimized more, but still: better.)

I've spent most of my life in a forgiving environment, which makes it very easy to adopt the attitude that having accurate beliefs isn't particularly important. I can go through life giving up lots of opportunities, and if I just don't think too much about the improvements I'm giving up I'll still be relatively content. It's emotionally easy to discount possible future benefits.

Even if I do have transient moments of awareness of how much better it can be, I can suppress them by thinking about all the ways it can be worse and how much safer I am right where I am, as though refusing to climb somehow protected me from falling.

The thing is: when the environment is risky and most things cost me, the cost of being incorrect is loss. That is, things don't go OK, and they get worse. And I can't control the environment.

It's emotionally harder to discount possible future losses.

Comment author: [deleted] 08 December 2010 03:03:49PM *  2 points [-]

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle. I suspect that a lot of practice in actually believing what you know will eventually cause the gap between knowing and believing to disappear. (Sort of the way that practice in reading eventually produces a person who can't look at a sentence without reading it.)

For example, I imagine that if you played some kind of betting game every day and made an effort to be realistic, you would stop expecting that wishing really hard for low-probability events could help you win. Your intuition/subconscious would eventually sync up with what you know to be true.

Comment author: TheOtherDave 08 December 2010 03:16:21PM 1 point [-]

(nods) That's been my experience.

Similarly: acting on the basis of what I believe, even if my emotions aren't fully aligned with those beliefs (for example, doing things I believe are valuable even if they scare me, or avoiding things I believe are risky even if they feel really enticing), can often cause my emotions to change over time.

But even if my emotions don't change, my beliefs and my behavior still do, and that has effects.

This is particularly relevant for beliefs that are strongly associated with things like group memberships, such as in the atheism example you mention.

Comment author: shokwave 08 December 2010 03:30:11PM 0 points [-]

I was always under the impression that a sort of "work" can lead you to emotionally believe things that you already know to be true in principle.

I strongly associate this with Eliezer's description of the brain as a cognitive engine that needs to do a certain amount of thermodynamic work to arrive at a certainty level - and reasoned, logical conclusions that you 'know' fail to produce belief (enough certainty to act on knowledge) because they don't make your brain do enough work.

I imagine that forcing someone to deduce bits of probability math from earlier principles and observations, then have them use it to analyze betting games until they can generalise to concepts like expected value, would be enough work to have them believe probability theory.

Comment author: David_Gerard 19 January 2011 09:11:51AM -1 points [-]

Should I break down that barrier? I'm not sure. I'd do it if it would allow me to make money, I think. But not if it came at the cost of some kind of screaming Cthulhu horror.

This sounds like worrying about tripping over a conceptual basilisk. They really are remarkably rare unless your brain is actually dysfunctional or you've induced a susceptibility in yourself. Despite the popularity of the motif of harmful sensation in fiction, I know of pretty much no examples.

Comment author: MoreOn 12 December 2010 08:01:04PM 3 points [-]

Overestimating my driving skills is obviously bad. But how about this scenario of the possibility of happiness destroyed by the truth?

Suppose, on the final day of exams, on the last exam, you think you’ve done poorly. In fact, you only got 1 in 10 questions completely right. On the other 9, you hope you’d get at least a bit of partial credit. On the other hand, all 4 of your friends (in the class of 50) think they’ve done poorly. Maybe there will be a curve? In fact, if the final exam curve is good enough, you might even get an A for the course.

The grade goes online at 6 PM. It’s already there, and it won’t change.

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD).


My “solution” to this problem (probably irrational?) is in the spirit of “The other way is closed.” I look.

To maximize utility, I shouldn't look at the grade until tomorrow morning. Some people don't. Once, I didn't, and it didn't bother me too much. And after bad grades, the outcome was usually pretty much as expected. So I know my utility function. That's not the reason.

This is like the two-box decision of Newcomb’s problem. Rationally (according to Eliezer) you would pick one box. I’m not rational. I pick two. What’s there, is already there.

I. JUST. CAN’T. NOT. LOOK.

Comment author: Jordan 12 December 2010 08:14:34PM 2 points [-]

Sometimes I come up with an awesome idea for my research, something that seems like it will totally blow open the problem I've been working on for weeks/months/years. After having such amazing moments of insight I usually take a couple of days off because the potential that the idea is right just feels so good, and because, well, in research it usually turns out that most amazing insights don't solve that problem you've been working on for years.

Comment author: MoreOn 12 December 2010 09:41:35PM 0 points [-]

I know what you mean. I get that all the time, with all of the unsolved math problems I occasionally look at. And since my name isn't on wikipedia yet, I haven't solved any of them.

Although, in this case I would argue that we're better off knowing we're wrong, than being happy for the wrong reasons. The happiness at an end-of-semester party comes from a different source (socializing, having fun, etc), which are, dare I say, the "right" reasons. Destroying this happiness by the truth will not lead to the discovery of more truth, as it were (the grade is already there). Destroying the happiness over a mistake at least lets you find truth in acknowledging such mistake.

But then again, if I have a "brilliant" idea, I start working on it immediately, without giving myself much of a chance to bask in its brilliance.

Comment author: Desrtopa 12 December 2010 11:07:34PM *  1 point [-]

So what do you do? This is the last grade of the semester, and no more exams to study for. A bad grade will make you unhappy for the rest of the evening (you wanted to go to that party, right? You won’t have much fun thinking about that grade). A good grade will make you happy, but so what? Happiness comes with diminishing marginal returns (and for me it’s more like a binary value, happy or not). You have a higher expected utility for tonight if you don’t check your grade. And you’re not any worse off checking the grade tomorrow.

Should you destroy all that expected utility by the truth? (For reference, the truth is that you got a C-, which is BAD).

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.

Comment author: JoshuaZ 12 December 2010 11:18:33PM 1 point [-]

In practice, I think that all but the most optimistic humans would tend to imagine a grade worse than they probably received until shown otherwise, so looking at the grade would tend to revise your happiness state upwards.

The Dunning-Kruger effect suggests that people on average will be too optimistic about grades.

Comment author: Desrtopa 12 December 2010 11:29:13PM 5 points [-]

Depending on their degree of competence. People who are actually competent tend to underestimate themselves. Perhaps I've simply developed an unrepresentative impression by associating more with people who are generally competent.

Comment author: MoreOn 13 December 2010 12:37:32AM *  1 point [-]

I would think that an ideal rationalist's mental state would be dependent on their prior determination of their most likely grade, and on average actually looking at it should not tend to revise that assessment upwards or downwards.

Suppose I estimate the probability of a good curve at roughly p=5/50=10%. If there's a curve, I'll get an A (utility value 4); else C- (utility value 1.7). Suppose then that I need a minimum utility of 2 to enjoy the party (the party itself is worth utility 0.2).

My expected utility from not checking the grade is 0.1 x 4 + 0.9 x 1.7 + 0.2 = 2.13. My actual utility once I'd checked the grade is 1.7 + 0.2 = 1.9.

If this expected utility estimate is good, then I should be happy in proportion to it (although I might as well acknowledge now that I failed to account for the difference between expected utility and the utility of the expected outcome, thus assuming that I'm risk-neutral).
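For what it's worth, the arithmetic spelled out (values taken from the comment above):

```python
# MoreOn's numbers: P(good curve) = 5/50, an A is worth 4, a C- is
# worth 1.7, the party itself adds 0.2, and total utility of at least
# 2 is needed to enjoy the party.
p_curve = 5 / 50
u_A, u_Cminus, u_party = 4.0, 1.7, 0.2

# expected utility before looking at the grade
expected = p_curve * u_A + (1 - p_curve) * u_Cminus + u_party
# realised utility after seeing the C-
actual = u_Cminus + u_party

enjoy_without_looking = expected >= 2  # 2.13: above the threshold
enjoy_after_looking = actual >= 2      # 1.90: below it
```

So under this model, not looking keeps you above the enjoyment threshold and looking drops you below it, which is exactly the tension the comment describes.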

Comment author: Desrtopa 13 December 2010 01:07:55AM *  0 points [-]

Rather than there being a discrete point above which you will be able to enjoy the party and below which you will not, I would expect the amount you enjoy the party to vary according to the grade you got, unless the cutoff point is due to some additional consequence of scoring below that grade which will be accompanied by an additional utility hit. Your prior expected utility would incorporate the chance of taking that additional hit times the likelihood of it occurring.

Anyway, in any specific case, your utility may go up or down by checking your grade, but if you have a perfectly accurate assessment of the probability distribution for your grade, then on average your expected utility should be the same whether you check or not.

In this case, the fact that we know the actual grade stands to be misleading, since it's liable to make any probability distribution that doesn't provide an average expected grade of 1.7 look wrong, even though that might not be the average predicted by the available data.

Comment author: MoreOn 14 December 2010 05:14:06AM 0 points [-]

I considered your point at length. To address your comment, I could use an ignorance prior on my old model, assigning equal probability values to everything between 1.7 and 4.0. Discrete if need be. I could use a binary output value for "enjoying the party," 1 or 0. I could do lots of other tweaks.

But the problem here is, everything comes down to whether this model (or any other 5-minute model) is good enough to explain my non-rationalist gut feeling, especially without an experiment. And, you know, I'm not about to fail an easy exam in a couple of days just to see what my utility function would do.

Comment author: Desrtopa 14 December 2010 06:58:17AM 1 point [-]

Conservation of expected evidence means that ideally, you can't expect the introduction of new evidence to affect your expected utility. In practice, that's probably not the case, but humans aren't even rough approximations of ideal rationalists.
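Conservation of expected evidence is easy to verify numerically: for any prior and any likelihoods (the numbers below are illustrative), the probability-weighted average of the posteriors recovers the prior exactly:

```python
# Grade G is either A or C-, with prior P(G=A) = 0.1.  Suppose some
# noisy signal S says "good" with probability 0.7 if G=A and 0.2 if
# G=C- (made-up likelihoods for illustration).
p_A = 0.1
p_good_given_A, p_good_given_C = 0.7, 0.2

# Total probability of seeing the "good" signal
p_good = p_A * p_good_given_A + (1 - p_A) * p_good_given_C

# Bayes' rule for each possible observation
p_A_given_good = p_A * p_good_given_A / p_good
p_A_given_bad = p_A * (1 - p_good_given_A) / (1 - p_good)

# Conservation of expected evidence: averaging the posteriors,
# weighted by how likely each observation is, gives back the prior.
avg_posterior = p_good * p_A_given_good + (1 - p_good) * p_A_given_bad
```

Any particular observation moves the posterior up or down, but before observing you cannot expect it to move in either direction, which is the sense in which looking "shouldn't" change your expected utility.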

Comment author: wnoise 13 December 2010 06:46:04AM 2 points [-]

I would be happier knowing the grade is bad, rather than not knowing at all. Knowing leaves me free to enjoy the party, rather than worry about it and be distracted at the party.

Comment author: buybuydandavis 29 October 2011 08:05:24AM *  0 points [-]

This is the peculiar blindness of rationalists. Everywhere you look, you can see people denying reality, and yet rationalists talk like it can't be done.

Winston, after being tortured, eventually could see 5 fingers where there were only 4. Most people are much more malleable than that; they already have a preference for believing what they're told to believe. You can see it everywhere you look.

Even if you ignore the daily evidence of your senses, just as a matter of the evolutionary pressure of centuries of ideological terror and executions, shouldn't we expect independent minds to have gone the way of the dodo?

Is it impossible for us to change our spots? I don't know. Maybe. Maybe not. Maybe we just aren't rational enough yet, still fetishistically clinging to our mania for epistemic rationality, ignoring the tradeoffs we make to instrumental rationality, which is where the rubber meets the road.

Second order rationality implies...

No it doesn't. As long as one includes instrumental rationality in the mix, it implies nothing of the sort. Instrumental rationality is what achieves your values. If you're really committed to winning, not just toeing the line on epistemic rationality, you can and will probably change. Your mind can calculate a lot more than what you can bat around in your head self consciously.

Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.

That isn't the thought that defeats me, nor is it the thought that defeats Orwell. The horror is that Double Think may win, and may already be winning.

I find it grotesque, but the universe seems blithely unconcerned about my preferences.

Comment author: Acidmind 20 August 2012 10:38:24AM 1 point [-]

Quite the contrary: Alcohol.

Comment author: ancientcampus 21 August 2012 11:08:34PM 0 points [-]

I want to upvote this thing so hard.

Comment author: chaosmosis 24 August 2012 10:51:10PM 0 points [-]

Since this is today's featured article, I'm commenting; maybe someone will see this and want to engage this argument, or agree and make sure that the smart guys in lab coats see it.

By the time you realize you have a choice, there is no choice. You cannot unsee what you see. The other way is closed.

For now. But once we control neuroscience really well, this entire can of worms gets opened up again. Perhaps Brave New World would be a more appropriate dystopia to reference than 1984, because in that world they actually DO believe what the government wants them to, because they're so well controlled by the sleep hypnosis and conditioning they receive.

So we'll need a different and better solution eventually. And this solution will also need to deal with the infinite regress.

Comment author: Origin64 18 December 2012 08:29:36PM 0 points [-]

"You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived."

As far as I've got this happiness thing figured out, you're it when you believe you are, and you're not when you believe you're not. There is, in fact, not necessarily a correlation between how happy a person should be, and how happy they feel. Feelings don't have to correspond to reality. One can consciously choose to no longer be bothered by something and just be happy instead. For me at least, with a little effort, it works. And it can be validated just as easily, just by stating that my sole utility function is to be happy. The human brain is a lotus-eater machine.

Comment author: James_Miller 20 December 2012 06:38:31PM 1 point [-]

The 30-year-old me who was terrified of death would have given belief in an afterlife as an exception to this. The 45-year-old me is a member of cryonics provider Alcor.

Comment author: Ritalin 31 May 2013 01:00:20AM 1 point [-]

Well, when you have "Homosexuals in the Basement" and the Nazi officer rings at your door, you had better make yourself believe you don't have them. What, precisely, is the difference between this deep-immersion roleplay and genuine self-delusion, for all practical purposes? This is not a rhetorical question.

Comment author: Viliam_Bur 02 September 2013 02:31:00PM 3 points [-]

Make yourself believe strongly enough that you will invite the officer to check your basement?

Comment author: Ritalin 06 September 2013 07:24:13AM 0 points [-]

Just to be practical, it is better to make yourself believe that you have something merely embarrassing in your basement: say, a (straight) porn stash, or undeclared valuables, or any other embarrassing property that would make the officer sigh and roll his eyes on account of having bigger fish to fry.

Comment author: Kawoomba 06 August 2013 06:37:40PM 1 point [-]

Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen. Second-order rationality implies that at some point, you will think to yourself, "And now, I will irrationally believe that I will win the lottery, in order to make myself happy." But we do not have such direct control over our beliefs.

We routinely generate a swath of irrational beliefs, spawned e.g. by deep-seated biological biases such as "That girl I just met, she is so special, I will love and cherish her forever and ever." You notice that a belief like that makes you happy, so you give it only a cursory examination of some worst-case boundaries. If you then judge the belief to be mostly harmless, you simply don't look at it any closer.

Changing a belief consciously means reflecting on it. Stop the reflecting, and you stop the updating (and keep the happiness).

Comment author: army1987 13 August 2013 11:47:03PM 0 points [-]

Comment author: SeanMCoincon 31 July 2014 07:11:49PM 0 points [-]

This immediately brings to mind the old adage about it being better to be Socrates dissatisfied than a pig satisfied. I'd imagine, from the pig's point of view, that the loftiest height of piggy happiness was not terribly dissimilar from the baseline level of piggy contentment, so equating "happiness" to "contentment" would not be an inexcusable breach of piggy logic. Indeed, we humans pretty much have to infer this state of affairs when considering animal wellbeing ("appearance of sociobiological contentment approximates happiness"), as we don't yet possess any means of engaging animals in philosophical conversation on the subject.

Yet it seems that those who would have us believe "blissful ignorance" is an absolute good are needlessly confusing contentment with happiness. Happiness registers more as a positive, aspirational value within the context of the human experience range; contentment seems more a negative, absence-of-dissatisfaction value that indicates only that things aren't going poorly. Doublethink and willful ignorance do not seem able to positively provide qualia that contribute to happiness; they can only obscure knowledge of things that are actually going poorly, thus creating a false sense of contentment.

That's my general counterpoint whenever people speak positively of the "happiness" created by things like religion and opiates. Nothing is being added; your knowledge of reality is being obscured. It's difficult to see how that approach could be considered a mature option.