Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Caravelle 24 July 2011 10:32:44PM *  21 points

This.

I don't know if latent homosexuality in homophobes is the best example, but I've definitely seen it in myself. I will sometimes behave in certain ways, for motives I find perfectly virtuous or justified, and it is only by analysing my behaviour post-hoc that I realize it isn't consistent with the motives I thought I had - but it is consistent with much more selfish motives.

I think the example that most shocked me was back when I played an online RPG and organised an action in a newly-coded environment. I and others on my team noticed an unexpected consequence of the rules that would make it easy for us to win. Awesome! We built our strategy around it, proud of our cleverness, and went forward with the action.

And down came the administrators, furious that we had cheated that way.

I was INCENSED at the accusation. How were we supposed to know this was a bug and not a feature? How dare they presume bad faith on our part? I loudly and vocally defended our actions.

It's only later, as I was re-reading our posts on the private forum where we organised the action (posts that, I realized as I re-read them, the administrators had access to, and had probably read... please kill me now), that I noticed that not only did we discuss said bug, I specifically told everyone not to tell the administrators about it. At the time, my reasoning was that, well, they might decide to tell us not to use it, and we wouldn't want that, right?

But if I'd thought there was a chance that the administrators would disapprove of us using the bug, how could I possibly think it wasn't a bug, and that using it wasn't cheating? If I was acting in good faith, how could I possibly not want to check with the administrators and make sure?

Well, I didn't. I managed to cheat, obviously and blatantly, with no conscious awareness that I was doing so. That's not even quite true; I bet if I'd thought it through, as I did afterwards, I would have realized it. But my subconscious was damn well not going to let me think it through now, was it?

And why would my subconscious not allow me to understand I was cheating? Well, the answer is obvious: so that I could be INCENSED and defend myself vocally, passionately and with utter sincerity once I did get accused of cheating. Heck, I probably did get away with it in some people's eyes. Those that didn't read the incriminating posts on the private forum, at least.

So basically, now I don't take my motives for granted. I try to consider not only why I think I want to do something, but what motives one could infer from the actual consequences of what I want to do.

It also means I worry much less about other people's motives. If motives are a perfect guide to people's actions, then someone who thinks they truly love their partner while their actions result in abuse might just be an unfortunate klutz with anger issues, who should be pitied and given second chances instead of dumped. But if the subconscious can have selfish motives and cloak them in virtue for the benefit of the conscious mind, then that person can have the best intentions and still be an abuser, and one should very much DTMFA.

Comment author: AbyCodes 24 July 2011 07:19:52AM *  1 point

"if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper."

Couldn't it be the case that someone who tries to escape from a burning building does so just to avoid the pain and suffering it inflicts? Being burned alive would be such a drag compared to a peaceful, painless death by poison.

Comment author: Caravelle 24 July 2011 09:49:49PM 4 points

That doesn't help much. If people were told they were going to be murdered in a painless way (or something not particularly painful - for example, a shot for someone who isn't afraid of needles and has no problem getting vaccinated) most would consider this a threat and would try to avoid it.

I think most people's practical attitude towards death is a bit like Syrio Forel from Game of Thrones - "not today". We learn to accept that we'll die someday, we might even be okay with it, but we prefer to have it happen as far in the future as we can manage.

Signing up for cryonics is an attempt to avoid dying tomorrow - but we're not that worried about dying tomorrow. Getting out of a burning building means we avoid dying today.

(whether this is a refinement of how to understand our behaviour around death, or a potential generalized utility function, I couldn't say).

Comment author: hairyfigment 24 July 2011 08:12:38PM *  0 points

Well, I was trying to think of a general rule that L&J could follow in order to repair their worldview. (Obviously we should consider following this rule as well, if we can find one.) I came up with, 'Ask how all the facts, as you see them, fit together.'

We could probably find a better version. Eliezer suggests the rule, 'Try to find the thought that hurts the most,' or loosely paraphrased, 'Ask what the Sorting Hat from Methods of Rationality would tell you.' But for L&J such an imaginary conversation would likely end with the phrase, 'Get behind me, Satan!' They do not expect to have the brains to see how their facts fit together, considering this the province of God. And yet I feel like they must have some experience with judging beliefs successfully. Surely they can't have formed a publishing empire, and survived to the point where they could hire assistants, solely by doing what 'the proper God-appointed authorities' told them to do? (Though they do lump airplane pilots and scientists together as 'practical authorities.' And we know that more belief in the wisdom of 'traditional authority' correlates with more gullibility about any claim the authorities in question have not condemned -- see Chapter 3 here. Hmm.)

So I want to say that a personalized version of my rule would have a better effect than imagining criticisms directly. Perhaps we could tell the authors to imagine in detail what they would see if (or when) God shows them how all their facts fit together, and exactly how this would allow them to answer any and all objections. This seems connected with the act of imagining a paradise you would actually prefer to Reedspacer's Lower Bound as a long-term home. Both involve the rule, 'Ask how it would actually work given what you know about people/yourself.'

You might think that authors who wrote about the kingdom Jesus will establish on Earth wouldn't need to hear these rules. You'd be wrong. :-)

Comment author: Caravelle 24 July 2011 08:49:06PM 2 points

"Surely they can't have formed a publishing empire, and survived to the point where they could hire assistants, solely by doing what 'the proper God-appointed authorities' told them to do?"

Dunno; I wouldn't underestimate to what extent plain instinct can make one behave in a rational-like manner even though one's beliefs aren't rational. How those instincts are rationalized post-hoc, if they're rationalized at all, isn't that relevant.

"Perhaps we could tell the authors to imagine in detail what they would see if (or when) God shows them how all their facts fit together, and exactly how this would allow them to answer any and all objections."

I would agree with Eliezer's rule more than with yours here. For one thing, the issue isn't so much that L&J aren't following the right rationality rules; I suspect they don't want to follow the right rationality rules. I don't know if they haven't realized they have to follow them to be right, or don't care that much about being right (or to be more accurate, they're sufficiently married to their current worldview that they don't even want to consider it might be wrong), but I'm pretty sure if someone suggested they follow either your rule or Eliezer's they'd just stare blankly.

So there's that. But if we assume we managed to get them to listen to what we say, then I think Eliezer's rule would work much better, because it's much harder to misuse. "Ask yourself how things would actually work" is prone to rationalization; I can just picture the sentence getting translated by some brain module as "picture your current model. Nice, innit?".

Or, put another way, I think that the part of the brain that actually examines one's beliefs and the part of the brain that gives you the warm glow of self-satisfaction from being right are not the same part of the brain. Your question will get intercepted by the warm-glow part; Eliezer's question will not. In fact it looks specifically designed to avoid it.

In particular, if "try to find the thought that hurts you the most" would elicit "Get behind me, Satan!", I'm not convinced that "try and work out how your worldview would actually work" wouldn't have the same results. Satan is the Great Deceiver, after all. How easy would it be to assume, once you meet the first contradiction, that Satan is clouding your thoughts...

Comment author: Raemon 18 July 2011 11:58:57PM 0 points

I've been meaning to try it out, since the potential +happiness and +goodness seemed like an obvious win, but I keep putting it off. My excuse this week is I have the flu.

Comment author: Caravelle 24 July 2011 08:27:22PM 1 point

I've never worked in a soup kitchen (although I should, because I think I might enjoy it), but I've found that often, when I voluntarily engage in a social and purely beneficial activity, I enjoy myself enormously. There's a kind of camaraderie going on; it's like the pleasure of social interaction is combining with the pleasure of Helping in just the right ways.

I don't expect it would work all the time, or for everyone. And I usually feel differently when I'm forced to do something instead of volunteering. Still, it could be a factor in why some people enjoy that sort of thing.

Comment author: Caravelle 24 July 2011 07:25:21PM 8 points

I have a question. This article suggests that for a given utility function there is one single charity that is best, and that's the single charity one should give money to. That looks a bit problematic to me - for example, if everyone invests in malaria nets because that's the charity that saves the most lives, then nobody is investing in any other kind of charity, but shouldn't those things get done too?

We can get around this by considering that the efficiency function varies with time - for example, once everybody gives their money to buy nets the marginal cost of each saved life increases, until some other charity becomes best and all charitable giving switches to that one.

But we don't have complete and up-to-the-second knowledge of how many lives each marginal dollar will save in every charity; all we have to work with is approximations. In that situation, wouldn't it be best to have a basket of charities one gives to, with more money going to those that save the most lives, rather than putting all the money on a single charity?

Or is this consideration completely and utterly pointless in a world where most people do NOT act like this, and most people don't give enough money to change the game, so rational actors who don't have millions of dollars to give to charity should always give to the one that saves the most lives per dollar anyway?
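For what it's worth, the trade-off in the question above can be sketched with a toy model (the charity names and all numbers below are entirely made up for illustration, not drawn from any real cost-effectiveness data): give each charity a diminishing-returns curve and allocate a budget greedily, one chunk at a time, to whichever charity has the best current marginal return.

```python
def marginal_return(base, funded, halflife):
    """Hypothetical diminishing-returns curve: cost-effectiveness
    halves for every `halflife` dollars already given."""
    return base * 0.5 ** (funded / halflife)

def allocate(budget, charities, step=1000):
    """Greedily give `budget` away in `step`-dollar chunks, each chunk
    going to whichever charity currently saves the most lives per
    marginal dollar. `charities` maps name -> (lives per dollar at
    zero funding, halflife in dollars)."""
    funded = {name: 0.0 for name in charities}
    for _ in range(int(budget // step)):
        best = max(charities, key=lambda n: marginal_return(
            charities[n][0], funded[n], charities[n][1]))
        funded[best] += step
    return funded

# Made-up numbers: nets start out twice as cost-effective.
charities = {
    "nets":    (0.002, 10_000_000),
    "measles": (0.001, 10_000_000),
}

# A small donation barely moves the curve, so it all goes to one charity.
print(allocate(100_000, charities))      # everything to "nets"

# A donor big enough to push "nets" down its curve ends up splitting.
print(allocate(50_000_000, charities))   # mostly "nets", but "measles" gets funded too
```

Under these assumed curves, a donor too small to move the curves should indeed concentrate on the single best charity; the "basket" only becomes optimal at budgets large enough to erode the leader's marginal advantage.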

Comment author: Caravelle 24 July 2011 06:21:19PM 5 points

Hello all!

I'm a twenty-seven-year-old student doing a PhD in vegetation dynamics. I've been interested in science since forever, in skepticism and rationality per se for the last few years, and I was linked to LessWrong only a few months ago and was blown away. I'm frankly disconcerted by how every single internet argument I've gotten into since has involved invoking rationality and using various bits of LessWrong vocabulary; I think the last time I absorbed a worldview that fast was from reading "How the Mind Works", lo these many years ago. So I look forward to seeing how that pans out (no, I do not think I'm being a mindless sheep - I don't agree with everything Steven Pinker said either. I'm just in the honeymoon "it all makes SENSE!" phase).

I've got to say, I'm really grateful for this great resource and to the internet for giving me access to it. Next time an old geezer tells me about how awesome the 50s and 60s were I'll bonk them over the head. Metaphorically.

What do I value? 1) being right and 2) being good, in no particular order. I'm afraid I'm much better at the first one than the second, but reading posts here has gotten me to think a bit on how to integrate both.

Comment author: hairyfigment 20 July 2011 02:33:40AM 1 point

Welcome to Less Wrong! Do you also post at Slacktivist, where Fred dissects the Left Behind series?

If so, you might appreciate this: I think the part about integrating knowledge into your view of the world gets at the root problem with Left Behind. If we assume those books represent the authors' views in some way, then the authors seem to have no concept of facts fitting together. So for example it doesn't matter that Jesus lies about the characters' actions, and their effect on his own, while using mind control to make those characters give the "right" responses to his canned speech. It only matters that the words come from the Bible. This makes the biblical "prediction" technically true. The Bible itself is reliable because of biblical inerrancy, and for no other reason -- God is not always honest in other contexts.

Comment author: Caravelle 24 July 2011 05:53:02PM 0 points

Hi! Yep, it's the same me, thanks for the welcome!

I don't know if I'd call integrating knowledge THE root problem of Left Behind, which has many root problems, and a lack of integration strikes me as too high-level and widespread among humans to qualify as *root* per se...

But yeah, good illustration of the principle :-)

(and thanks for the welcome link, I'd somehow missed that page)

Comment author: AdeleneDawner 13 May 2011 02:25:22AM 9 points

For example:

Say I live in a bad neighborhood, but I'm kind of clueless and don't really want to believe it. I hear gunshots sometimes, but rationalize that it must just be cars backfiring. I hear my neighbors fighting, but tell myself it must be a TV program that someone has on really loud. I see people hanging around outside, selling who-knows-what, but tell myself that it must just be the local culture, and it's not my place to say that other people can't spend time outside, that's just silly.

The probability of the police breaking my door down because someone taking anonymous tips about drug activity misheard an apartment number is not any better in that situation than in the one where I admit to what's going on; my beliefs don't change the police's behavior. And in the situation where I acknowledge what's going on, I can do something about it, like finding somewhere else to live.

Acknowledging it is less comfortable - being afraid of one's neighbors is not fun, and the first situation avoids that - but feeling less fear doesn't mean there's actually less danger.

Comment author: Caravelle 24 July 2011 04:30:37PM *  5 points

I can see the objection there, however, partly because I sort of have this issue. I've never been attacked, or mugged, or generally made to feel genuinely unsafe - those few incidents that have unsettled me have affected me far less than the social pressure I've felt to feel unsafe - people telling me "are you sure you want to walk home alone?", or "don't forget to lock the door at all times!".

I fight against that social pressure. I don't WANT the limitations and stress that come with being afraid, and the lower opinion it implies I should have of the world around me. I value my lack of fear quite highly, overall.

That said, is it really to my advantage to have a false sense of security? Obviously not. I don't want to be assaulted or hurt or robbed. If the world really is a dangerous place there is no virtue in pretending it isn't.

What I should do is work to separate my knowledge from my actions. If I really want to go home alone, I can do this without fooling myself about how risk-free it is; I can choose instead to value the additional freedom I get from going over the additional safety I'd get from not going. And if I find I don't value my freedom that highly after all, then I should change my behaviour with no regrets. And if I'm afraid that thinking my neighbourhood is unsafe will lead me to be a meaner person overall, well, I don't have to let it. If being a kind person is worth doing at all, it's worth doing in a dangerous world.

(this has the additional advantage that if I do this correctly, actually getting mugged might not change my behaviour as radically as it would if I were doing all that stuff out of a false sense of security)

Of course the truth is that it isn't that simple: our brain being what it is, we cannot completely control the way we are shaped by our beliefs. As earlier commenters have pointed out, admitting you're gay won't affect the fact that you are gay, and it doesn't imply you should worsen your situation by telling your homophobic friends that you're gay; but our brains happen to be not that good at living a sustained lie, so in practice it probably will force you to change your behaviour.

Still, I don't think this makes the litany useless. I think it is possible, when we analyse our beliefs, not only to figure out how true they are but also to figure out the extent to which changing them would really force us to change our behaviour. It probably won't lead to a situation where we choose to adopt a false belief - the concept strikes me as rather contradictory - but at the end of the exercise we'd know better which behaviours we really value, and we might figure out ways to hold on to them even as our beliefs change.

Comment author: Caravelle 22 June 2011 09:49:57PM 3 points

When I was a kid I had this book called "Thinking Physics", which was basically a book of multiple-choice physics questions (such as "an elephant and a feather are falling, which one experiences more air resistance?", or "Kepler and Galileo made telescopes around the same time and Kepler's was adopted widely, why?") aimed at pointing out where our natural instincts or presuppositions go against how physics actually works, and explaining, well, how physics actually works.

Really, the simple idea that physics is a habit of thought that has to be worked on, because our defaults are incorrect (or, as I realized much later, are correct only in the special case of the everyday life of a social hominid), has been helpful to me ever since, and too few people have it or realize it's important.

I think it gets to what you're saying: one shouldn't learn physics (or anything, for that matter) as a list of facts or methods to apply in the classroom; one should work to integrate them into one's mental model of the world. Which is not as easy as it sounds.

Comment author: Caravelle 22 June 2011 02:35:20PM 0 points

I've been thinking of this question lately, and while I agree with the main thrust of your article, I don't think that giving all possible objections is always feasible (it can get really long, and sometimes there are thematic issues). Which is why I think multiple people responding tends to be a good thing.

But more to the point, I don't think I agree that RA is moving the goalposts. Because really, every position has many arguments pro and con, and even if just one is demolished the position can survive off the others. I think the arguing technique that really is problematic is abandoning position A to go to position B while still taking A as true, thus continuing to make arguments based on A, or going back to asserting A once B doesn't work out.

I think that if someone explicitly concedes A before going on to B, and doesn't go back to A afterwards (unless they've got new arguments of course) they aren't doing anything wrong.
