
In response to A Rationalist's Tale
Comment author: Manfred 10 September 2011 01:47:16AM 20 points

Are there also lots of "undramatic" people like me? Every one of these personal stories I see involves sadness, epiphany, that sort of thing, but is that publication bias or am I unusual?

Comment author: Virge 29 September 2011 01:52:36PM 0 points

Undramatic for me too.

If you've got a talent that keeps you very popular within a group, it's very easy to get sucked into being what those admiring people want you to be. Being bright, clear-thinking, eloquent, confident (and a musician) moves you very easily into a leadership position, and builds the feeling of responsibility for the welfare of the group.

It took me too long to commit to acknowledging my accumulated doubts and misgivings and examining them in anything other than a pro-Christian light. I had enough religious cached thoughts in an interconnected, self-supporting web that doubting any one of them was discouraged by the support of the others. However, I was spending more and more of my time aware of the dissonance between what I knew and what I believed (or, as I later realised, what I was telling myself I believed).

I ended up deciding to spend a few months of my non-work time examining my faith in detail -- clearing the cache, and trying to understand what it was that made me hold on to what I thought I believed. During that time I gradually dropped out of church activities.

I look back on that time and see it as a process of becoming more honest with myself. Had I tried to determine what I really believed by looking at what I anticipated and how that influenced my behaviour, I'd have realised a lot earlier that my true beliefs were non-supernatural. I'd just been playing an expected role in a supportive family and social group, and I'd adjusted my thinking to blend into that role.

Comment author: Patrick 21 September 2010 07:16:59AM 0 points

Good suggestion. Sorry Virge, I'm gonna have to stick with the 2nd; hopefully you can come to the next one.

Comment author: Virge 23 September 2010 02:51:24PM 0 points

Thanks Patrick. As it turns out, I think my 3rd is going to be completely taken up anyway. Maybe next time.

Comment author: Virge 20 September 2010 10:53:50AM 1 point

Apologies from me. My October 2nd is already booked for another party. (Not that I attend a lot of parties.)

Comment author: Virge 19 April 2010 01:51:33AM 5 points

Hi. I was an occasional contributor on OB and have posted a few comments on LW. I've dropped back to lurking for about a year now. I find most of the posts stimulating -- some stimulating enough to make me want to comment -- but my recent habit of catching up in bursts means that the conversations are often several weeks old and a lot of what needs to be argued about them has already been said.

The last post that almost prompted me to comment was ata's mathematical universe / map=territory post. It forced me to think for some time about the reification of mathematical subjunctives and how similar that was to common confusions about 'couldness'. I decided I didn't have the time and energy to revive the discussion and to refine my ideas with sufficient rigor to make it worth everyone's attention.

Over the past week I've worked through my backlog of LW reading, so I've removed my "old conversation" excuse for not commenting. I'll still be mostly a lurker.

Comment author: mattnewport 18 April 2009 06:45:24PM *  0 points

If you choose to reject any system that doesn't provide a "unique 'right' answer" then you're going to reject every system so far devised.

It seems to me that utilitarianism is trying to answer the wrong question. I don't think there's anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others but not equal concern for all others). I see morality and ethics, to a large extent, not as theoretical questions about what is 'right' but as empirical questions about what moral and ethical decision processes produce an evolutionarily stable strategy (ESS) for co-existing with other agents with different goals.

On my view of morality it's accepted that different agents will have different utilities for different outcomes and that there is not in general one outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal. It is not a problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for the resolution of conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving the process, not in some mythical quest for what is 'right'.

Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?

I haven't, but I've seen it mentioned before so I should check it out at some point. To be honest the title put me off when I first saw it linked because it makes it sound like it's aimed at someone who still holds the naive view of morality that it's about doing what is 'right'.

Comment author: Virge 19 April 2009 03:07:34AM 1 point

Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.

I think we're in agreement here.

For me the difficult questions arise when we take one universalizable moral principle and try to apply it at every level of organization, from the personal "what should I be doing with my time and energy at this moment?" to the public "what should person A be permitted/obliged to do?"

I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with an independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during formative years suggests to me that the equality and fairness principles I've grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent).

Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal.

I've been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I've been erecting an extreme utilitarian strawman. I think I have, and I'm seeing a glimmer of a solution to the confusion.

Given that we all have relationships we value, and that forcing ourselves to ignore those relationships in our daily activities represents negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone's emotional satisfaction from relationship activities.

(I feel less confused now. I'll have to think about this some more.)

Comment author: mattnewport 18 April 2009 06:19:44AM 0 points

Didn't you just suggest that we don't have to value the entirety of a murderer's utility function? There are certainly similarities between individuals' utility functions, but they are not identical. That still doesn't address the differential weighting issue. It's fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral, but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal, and so utilitarianism does not provide a unique 'right' answer in the face of any ethical dilemma and seems to me to be of limited value.

Comment author: Virge 18 April 2009 01:27:28PM 0 points

It's fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral, but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal, and so utilitarianism does not provide a unique 'right' answer in the face of any ethical dilemma and seems to me to be of limited value.

If you choose to reject any system that doesn't provide a "unique 'right' answer" then you're going to reject every system so far devised. Have you read Greene's The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?

However, I agree with you that any form of utilitarianism that has to have different weights when applied by different people is highly problematic. So we're left with:

  • Pure selfless utilitarianism conflicts with our natural intuitions about morality when our friends and relatives are involved.

  • Untrained intuitive morality results in favoring humans unequally based on relationships and will appear unfair from a third-party viewpoint.

You can train yourself to some extent to find a utilitarian position more intuitive. If you work with just about any consistent system for long enough, it'll start to feel more natural. I doubt that anyone who has any social or familial connections can be a perfect utilitarian all the time: there are always times when family or friends take priority over the rest of the world.

Comment author: mattnewport 17 April 2009 09:35:53PM 0 points

My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to provide equal weighting to all people's utility in my own utility function then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.

Comment author: Virge 18 April 2009 04:30:05AM 3 points

Your intuitions will be biased toward favoring a sibling over a stranger. Evolution has seen to that via kin selection.

Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings for individuals based on likelihood of particular individuals having a greater impact on overall utility, you don't (in general) get weightings that will match your intuitions.
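As a toy illustration of that mismatch (all numbers invented, not anyone's measured utilities), here is a weighted-sum sketch showing how an equal-weight utilitarian ranking and a kin-weighted intuitive ranking can disagree over the same two outcomes:

```python
# Toy example: compare equal-weight (utilitarian) and kin-weighted rankings.
# All utility numbers are invented for illustration.
outcomes = {
    "help_sibling":  {"sibling": 10, "stranger": 0},
    "help_stranger": {"sibling": 0,  "stranger": 12},
}

def aggregate(outcome, weights):
    """Weighted sum of individual utilities for a single outcome."""
    return sum(weights[person] * u for person, u in outcome.items())

equal_weights = {"sibling": 1.0, "stranger": 1.0}  # utilitarian: everyone counts equally
kin_weights = {"sibling": 2.0, "stranger": 1.0}    # intuition: kin count for more

for name, outcome in outcomes.items():
    print(name,
          "equal-weight:", aggregate(outcome, equal_weights),
          "kin-weighted:", aggregate(outcome, kin_weights))

# Equal weights rank help_stranger first (12 > 10);
# kin weights rank help_sibling first (20 > 12): the rankings diverge.
```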

I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don't know at all.

In reality, the money I spend on my two cats could be spent improving the happiness of many humans -- humans that I don't know at all, who are living a long way away from me. Clearly I don't apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.

Comment author: Virge 16 April 2009 02:30:03PM 4 points

I've noticed strong female representation (where I least expected to find it) in The Skeptic Zone, an Australian skeptics group. The feeling I get from that community (even just as a podcast lurker) is that it's much more lighthearted than LW/OB. Whether that makes any difference to sex ratios, I don't know.

For most of the time I've listened to the podcast, there have been regular strong contributions from women. My gut feeling would have been that good female role models would encourage more female participation; however, a quick eyeballing of the Skeptic Zone's Facebook fans suggests a ratio of about 5:1 in favour of males.

In response to Where are we?
Comment author: michaelhoney 03 April 2009 03:26:39AM 0 points

Canberra, Australia.

In response to comment by michaelhoney on Where are we?
Comment author: Virge 04 April 2009 01:38:54PM 0 points

Melbourne, Australia.

In response to Cached Selves
Comment author: AnnaSalamon 23 March 2009 02:34:50AM *  6 points

Could more people please share data on how one of the above techniques, or some other technique for reducing consistency pressures, has actually helped their rationality? Or how such a technique has harmed their rationality, or has just been a waste of time? The techniques list is just a list of guesses, and while I'm planning on using more of them than I have been using... it would be nice to have even anecdotal data on what helps and doesn't help.

For example, many of you write anonymously; what effects do you notice from doing so?

Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?

In response to comment by AnnaSalamon on Cached Selves
Comment author: Virge 24 March 2009 01:27:25AM *  7 points

Or what thoughts do you have regarding Michael Vassar's suggestion to practice lying?

(Reusing an old joke) Q: What's the difference between a creationist preacher and a rationalist? A: The rationalist knows when he's lying.

I'm having trouble resolving 2a and 3b.

2a. Hyper-vigilant honesty. Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.

3b. Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience. Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.

I find myself rejecting 3b as a useful practice because:

  • What I think will be an unimportant and undetectable lie has a nonzero probability of being detected and considered important by someone whose confidence I value (a toy expected-value sketch follows this list). See Entangled Truths, Contagious Lies.

  • This post points out the dangers of self-delusion from motivated small lies, e.g. "if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself." Is there any evidence to show that I'll be safer from my own lies if I deliberately tag them at the time I tell them?

  • Building rationalism as a movement to improve humanity doesn't need to be encumbered by accusations that the movement encourages dishonesty. Even though one might justify the practice of telling unimportant lies as a means to prevent a larger more problematic bias, advocating lies at any level is begging to be quote-mined and portrayed as fundamentally immoral.

  • The justification for 3b ("so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience.") doesn't work for me. I don't know if I'm different, but I find that I have far more respect for people (particularly politicians) who admit they were wrong.
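To make the first bullet concrete, here is a toy expected-value sketch (the probability, cost, and benefit figures are invented purely for illustration):

```python
# Toy numbers only -- invented for illustration, not measured.
p_detect = 0.01          # chance an "undetectable" practice lie is noticed
cost_if_detected = 500   # loss of confidence from someone whose trust I value
benefit_of_lie = 1       # the small convenience or comfort the lie buys

expected_cost = p_detect * cost_if_detected  # 5.0
print(expected_cost > benefit_of_lie)        # True: under these assumptions the lie isn't worth it
```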

Rather than practising being emotionally comfortable lying, I'd rather practise being comfortable with acknowledging fallibility.
