Eliezer_Yudkowsky comments on Rationality Quotes June 2013 - Less Wrong

Post author: Thomas 03 June 2013 03:08AM


Comments (778)


Comment author: Eliezer_Yudkowsky 02 June 2013 07:33:17PM 11 points [-]

Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle, and seem to tend toward greater altruism and less distress-minimization when taught 'rationality' by altruists.)

Comment author: syllogism 04 June 2013 07:04:39PM *  6 points [-]

That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.

Comment author: Baughn 05 June 2013 12:42:40AM *  3 points [-]

...because it makes you try to see reality as it is?

In me, it's also had the effect of reducing empathy. (Helps me not go crazy.)

Comment author: syllogism 05 June 2013 09:41:12AM 2 points [-]

Well, for me, believing myself to be a type of person I don't like causes me great cognitive dissonance. The more I know about how I might be fooling myself, the more I have to actually adjust to achieve that belief.

For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I was no longer satisfied with it. I now follow a utilitarian ethics that's much more materially expensive.

Comment author: RichardKennaway 05 June 2013 09:56:43AM 3 points [-]

Are they being taught 'rationality' by altruists or 'altruism' by rationalists? Or 'rational altruism' by rational altruists?

Comment author: pinyaka 03 June 2013 02:03:58PM 1 point [-]

Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren't very effective might be pretty distressing.

Comment author: Eliezer_Yudkowsky 03 June 2013 06:52:56PM 0 points [-]

That seems to verge on the trivializing gambit, though.

Comment author: pinyaka 03 June 2013 10:55:29PM 0 points [-]

I guess I don't see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation, why not use it?

Comment author: Psy-Kosh 06 June 2013 12:31:55AM 0 points [-]

Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self interest?

Comment author: pinyaka 06 June 2013 06:11:24PM 1 point [-]

I meant that everyone I've discussed the subject with believes that self-interest exists as a motivating force, so maybe "additional" would have been a better descriptor than "new."

Comment author: Psy-Kosh 07 June 2013 03:48:56PM 1 point [-]

Hrm... But "self-interest" is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation and such, it seems like it wouldn't be that hard a step for this to be implemented via actually caring about the other person's well-being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.

Comment author: pinyaka 08 June 2013 01:14:22AM 0 points [-]

Well, the trivializing gambit here would be to just say that "caring about another person" just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation and so your desire to help is triggered ultimately by the desire to remove this source of distress.

I'm not sure how concern for another's well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves, so that we could mount a better offense or defense. The simplest mechanism would just be to use a facial expression or posture to trigger a toned-down version of what we would normally feel when we had the same expression or posture (you're looking for information, not to literally feel the same thing at the same intensity; when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).

It's worth noting (for me) that this doesn't diminish the importance of empathy, and it doesn't mean that I don't really care about others. I think that caring for others is ultimately rooted in self-centeredness, but, like depth perception, it is probably a pre-installed circuit in our brains (a Type 1 system) that we can't really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you're trying to do something specific with your caring circuits (or trying to figure out how to emulate them).

Comment author: Locaha 05 June 2013 12:26:31PM 0 points [-]

Shouldn't the methods of rationality be orthogonal to the goal you are trying to achieve?