Less Wrong is a community blog devoted to refining the art of human rationality.

Fake Selfishness

Post author: Eliezer_Yudkowsky 08 November 2007 02:31AM

Followup to:  Fake Justification 

Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well.  I was feeling mischievous(*) that day, so I said, "I've observed that with most religious people, at least the ones I meet, it doesn't matter much what their religion says, because whatever they want to do, they can find a religious reason for it.  Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead.  It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms."

And the one said, "I don't think that's true."

I said, "If you're genuinely selfish, then why do you want me to be selfish too?  Doesn't that make you concerned for my welfare?  Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

The one replied:  "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."

And I said, "But I'm a small-L libertarian already, so I'm not going to support those laws.  And since I conceive of myself as an altruist, I've taken a job that I expect to benefit a lot of people, including you, instead of a job that pays more.  Would you really benefit more from me if I became selfish?  Besides, is trying to persuade me to be selfish the most selfish thing you could be doing?  Aren't there other things you could do with your time that would bring much more direct benefits?  But what I really want to know is this:  Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do?  Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

And the one said, "You may be right about that last part," so I marked him down as intelligent.

(*)  Other mischievous questions to ask self-proclaimed Selfishes:   "Would you sacrifice your own life to save the entire human species?"  (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.)  Or, taking into account that scope insensitivity leads many people to be more concerned over one life than the Earth, "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there gets horribly tortured for fifty years?"  (If they say that they'd be emotionally disturbed by knowing, specify that they won't know about the torture.)  "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?"  (Selfish libertarians only.)

Comments (59)

Comment author: Constant2 08 November 2007 03:01:51AM 2 points [-]

Have you read Mark Twain's "What is Man"? If I recall correctly, there he lays out his argument that man is already always selfish. For example, we do good deeds ultimately for our own comfort, because they make us feel good. (If I recall rightly, he also makes the, to me rather more interesting, point that people are born happy or sad rather than are made happy or sad by specific, supposedly uplifting or depressing, thoughts. That seems to anticipate the modern pharmaceutical approach to mood regulation.)

For my part, I think that "selfish" must describe a proper subset - not too small, and not too large - of human actions, in order to be a meaningful word. If, as Mark Twain claims, everything we do is selfish, then the word is useless and meaningless. While I acknowledge that practically everything done to benefit others without a clear quid pro quo in mind does indeed seem to have the effect of giving the doer spiritual comfort if nothing else, and may ultimately be done on that account (something we might test by observing a brain damaged patient who has lost the ability to feel that comfort), I would still call those actions "unselfish", simply because these sorts of actions are the paradigms, the prototypes, the models, the patterns, the exemplars, the teaching and defining examples, of "unselfishness". A selfish action necessarily involves a degree of unconcern about others. If there is sufficient concern for others, then the action is no longer selfish even if Mark Twain's psychological analysis of that concern (in terms of felt "spiritual comfort") is correct.

Comment author: Anonymous13 08 November 2007 03:17:05AM -2 points [-]

Did Hopefully Anonymous figure this out and stop expending effort commenting or posting on his anonymous blog?

Comment author: Nominull2 08 November 2007 04:15:55AM -1 points [-]

So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me".

Comment author: Adirian 08 November 2007 04:56:04AM 4 points [-]

I must point out that "whenever they want to be nice to someone" entails a desire to be nice to someone. Your very phrase defines it as being in their interests to be nice to someone. Rationalization isn't even necessary here. You wanted to do something - you did it. Selfishness isn't that complicated.

My guess would be that this individual had read Atlas Shrugged and hadn't fully understood what selfish meant in the context. Ayn Rand was setting out to redefine the word, not to glorify the "old" meaning.

Comment author: Tiiba2 08 November 2007 04:59:02AM 4 points [-]

I think that people are born selfish. This is based on the fact that, if I grew up in an environment that didn't compel me to be either selfish or selfless, I'd probably be selfish. Babies are selfish. I value those creatures that I was convinced to value. Slave owners taught their children that it's okay to beat slaves, and the children were happy to comply. Now most people disregard the pain of food animals because they can get away with it.

Of course, some of my actions are genuinely altruistic. I chose to give up meat, although this brings me little tangible benefit. (It does get me out of some accusations of hypocrisy.) One reason why I let myself become like this is that in human society, being nice is a habit that keeps my ass from getting kicked. And it needs to be a habit, because I'm not smart enough to delude everybody into thinking that I care, when I actually see them all as obstacles.

If I somehow become so powerful that I no longer depend on anyone, and nothing they do can harm me, I will probably quickly become corrupted by my power.

But I agree that now, I can't be considered purely selfish.

"So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me"."

My definition of an intelligent person is slowly becoming "someone who agrees with Eliezer", so that's all right. Plus, the guy showed ability to revise a strongly held belief.

Comment author: TGGP4 08 November 2007 05:01:06AM 2 points [-]

Read the comments at Hopefully Anonymous' most recent post. He explains why he has been inactive.

I want you to be altruistic, Eliezer. That's partly because I think you're intelligent. I would prefer if some people were more selfish though.

I choose living in comfort for one more year. There are things I might die for, but I don't know what exactly. Perhaps to spite someone. If other people knew that I had the chance to save the world and were going to punish me for failing to do so, I might not risk their wrath. I also choose the stranger getting tortured, but I might risk the toe-stubbing to prevent a policy of torture. I'd steal Gates' money (take that, beneficiary of intellectual property laws!) but really I wouldn't care if you stole a thousand dollars from me and I never found out (unless you meant finding out who specifically did it rather than finding out that it had happened at all).

Comment author: Gray_Area 08 November 2007 05:10:34AM 8 points [-]

"My definition of an intelligent person is slowly becoming 'someone who agrees with Eliezer', so that's all right."

That's not in the spirit of this blog. Status is the enemy, only facts are important.

Comment author: Tiiba2 08 November 2007 06:07:38AM 7 points [-]

"That's not in the spirit of this blog. Status is the enemy, only facts are important."

See? Another smart man agrees with Eliezer. That's what I'm talking about.

Comment author: Constant2 08 November 2007 06:09:09AM 1 point [-]

Almost as though Eliezer isn't a person, but a system of thought.

Comment author: Stephen 08 November 2007 06:24:31AM 2 points [-]

Taking a cue from some earlier writing by Eli, I suppose one way to give ethical systems a functional test is to imagine having access to a genie. An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie. A selfish person might say to the genie "create the scenario I most want/approve of." Then it would be impossible for the genie to carry out some horrible scenario the selfish person doesn't want. For this reason selfishness wins some points in my book. If the selfish person wants the desires of others to be met (as many people do), I, as an innocent bystander, might end up with a scenario that I approve of too. (I think the only way to improve upon this is if the person addressing the genie has the desire to want things which they would want if they had an unlimited amount of time and intelligence to think about it. I believe Eli calls this "external reference semantics.")

Comment author: NaomiLong 12 October 2011 08:32:38PM 1 point [-]

It seems like this is based more on the person's ability to optimize. The altruistic person who realized this flaw would then be able (assuming s/he had the intelligence and rationality to do so) to calculate the best possible wish to benefit the greatest number of people.

Comment author: Eliezer_Yudkowsky 08 November 2007 06:27:25AM 8 points [-]

"Do not seek to follow in the footsteps of the wise, seek what they sought." -- Nanzan Daishi, quoted by Matsuo Basho.

Comment author: Tiiba2 08 November 2007 07:02:44AM 0 points [-]

You most certainly are right. He is a fool who disagrees.

Comment author: Gray_Area 08 November 2007 07:15:21AM 2 points [-]

Stephen: the altruist can ask the Genie the same thing as the selfish person. In some sense, though, I think these sorts of wishes are 'cheating,' because you are shifting the computational/formalization burden from the wisher to the wishee. (Sorry for the thread derail.)

Comment author: Tiiba2 08 November 2007 07:17:57AM 2 points [-]

"An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie."

Eh? An altruist would voluntarily summon disaster upon the world?

By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive? Is it really so bad? If, when you imagine your brain rewired, you envision something that is too alien to be considered you, or too devoid of creative thought to be considered alive, it's possible that an AI ordered to make you happy would choose some other course of action. It would be illogical to create something that is neither you nor happy.

Comment author: Robin_Hanson2 08 November 2007 09:08:53AM 5 points [-]

Humans seem to gain social status by persuading others to agree with them. This is one of the reasons we resist being persuaded by good arguments. So a selfish person who wanted social status could want you to become selfish as well, showing dominance via the submission of others.

Comment author: Luciano_Dondero 08 November 2007 10:49:06AM 1 point [-]

It seems to me that this discussion is somewhat misleading. Each one of us members of the Homo sapiens species operates to pursue his/her own interests, as dictated by his/her genetic code. But in order to do so, we have to cooperate with some of our fellow human beings. Each and every one of our actions is in some way a combination of these two typical aspects of our behavior.

There is no such thing as a totally selfish or a totally unselfish behavior/action/activity, much less can we talk of a totally selfish or a totally unselfish person -- only extreme psychopaths (of the kind that become serial killers) may come close to representing an exception to this. However, we define selfish or unselfish behavior in others with respect to ourselves, either directly or indirectly, and our perception is inevitably biased. This is particularly obvious in personal relationships, where the pull of the distinct genetic programs mandates both cooperation and conflict -- and we may perceive selfishness or unselfishness in our partner, at times in ways that are somewhat contradictory to his/her intent and/or his/her deeper interests/needs/whatever.

Whether or not we choose to declare ourselves selfish or unselfish, and try to govern our actions to implement that self-description, is again in part related to our genetic pull to fulfill our "destiny", mediated by our experience, culture, material interests, sexual inclinations, and so on. But it seems to me that it would be wrong to actually take any such self-description for good, and it's worse still to actually demand that people be consistent with it. In the end, that's not very far from judging someone's character from his/her zodiac sign...

Comment author: Pablo_Stafforini 08 November 2007 12:21:40PM 6 points [-]

I said, "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

The objection you press against your interlocutor was anticipated by Max Stirner, the renowned champion of egoism, who replied as follows:

Do I write out of love to men? No, I write because I want to procure for my thoughts an existence in the world; and, even if I foresaw that these thoughts would deprive you of your rest and your peace, even if I saw the bloodiest wars and the fall of many generations springing up from this seed of thought — I would nevertheless scatter it. Do with it what you will and can, that is your affair and does not trouble me. You will perhaps have only trouble, combat, and death from it, very few will draw joy from it.

If your weal lay at my heart, I should act as the church did in withholding the Bible from the laity, or Christian governments, which make it a sacred duty for themselves to 'protect the common people from bad books'. But not only not for your sake, not even for truth's sake either do I speak out what I think. No —

I sing as the bird sings
That on the bough alights;
The song that from me springs
Is pay that well requites.

I sing because — I am a singer. But I use you for it because I — need ears.

Comment author: Caledonian2 08 November 2007 12:28:19PM 1 point [-]

Selfishness - or to avoid confusion, let's call the concept 'self-interest' - takes on rather a different appearance when it's realized that the 'self' is not something necessarily limited to the boundaries of the physical form that embodies the distinction.

To the degree that we identify with and value the rest of humanity, sacrificing one's own existence to preserve the rest of humanity can be in the self-interest. To the degree that we don't, or that we negatively value the rest of humanity, that action can be against self-interest. If we disliked humanity enough, we'd choose to destroy it even if it cost us our own lives (which presumably we'd value) in the process.

Comment author: Caledonian2 08 November 2007 12:37:21PM 1 point [-]

So, it seems that Eliezer's working definition of an intelligent person is "someone who agrees with me".

Communities form because they repress the incompatible. Particularly on the Internet, where it's easy to restrict who can participate, people tend to agree not because they persuade one another but because they seek out and associate with like-minded people.

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

Comment author: ehj2 08 November 2007 02:12:54PM 0 points [-]

If we posit an ideal world where every person has perfect and complete knowledge, and the discipline and self-control to act consistently on that knowledge, it's possible we can equate the most self-interested act with the most ethical.

Until we have that ideal world, to posit that people always simply do what they want to do anyway, and rationalize their behavior to their philosophy of life, is to engage in a bit of the same rationalization when we conclude that "if only they knew as much as I know, they would do what I think they should do."

Too much of what we are currently discovering about actual human behavior has to get swept under the rug to equate selfishness with ethics [e.g., the Stanford Prison Experiment].

/ehj2

Comment author: Brandon_Reinhart 08 November 2007 03:02:16PM 3 points [-]

Is providing answers to questions like "Would you do incredible thing X if condition Y were true?" really necessary, if thing X is something neither person would likely ever be able to do and condition Y is simply never going to happen? It seems easy to construct impossible moral challenges to oppose a particular belief, but why should beliefs be built around impossible moral edge cases? Shouldn't a person be able to develop a rational set of beliefs that does fail under extreme moral cases, but at the same time still holds a perfectly strong and non-contradictory position?

Comment author: Michael_Sullivan 08 November 2007 04:29:47PM 2 points [-]

Obviously Eliezer thinks that the people who agree with the arguments that convince him are intelligent. Valuing people who can show your cherished arguments to be wrong is very nearly a post-human trait - it is extraordinarily rare among humans, and even then unevenly manifested.

On the other hand, if we are truly dedicated to overcoming bias, then we should value such people *even more highly* than those whom we can convince to question or abandon *their* cherished (but wrong) arguments/beliefs.

The problem is figuring out who those people are.

But it's very difficult. If someone can correctly argue me out of an incorrect position, then they must understand the question better than I do, which makes it difficult or impossible for me to judge their information. Maybe they just swindled me, and my initial naive interpretation is really correct, while their argument has a serious flaw that someone more schooled than I would recognize?

So I'm forced to judge heuristically by signs of who can be trusted.

I tentatively believe that a strong sign of a person who can help me revise my beliefs is a person who is willing to revise *their* beliefs in the face of argument.

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me. The occasional mockery and disdain for those who disagree is a bit of a negative signal.

But this comment here is not a negative signal at all, for me. Why? Because even if Eliezer was wrong, the other party's willingness to reexamine is a strong signal of intelligence. Confirmation bias is so strong that the willingness to act against it is of great value, even if this sometimes leads to greater error. A limited, faulty error-correction mechanism (with some positive average value) is *dramatically* better than no error-correction mechanism in the long run.

So yes, if I can (honestly) convince a person to question something that they previously deeply held, that is a sign of intelligence on their part. Agreeing with me is not the signal. Changing their mind is the signal.

It would be a troubling sign for *me* if there were no one who could convince me to change any of my deeply held beliefs.

Comment author: Eliezer_Yudkowsky 08 November 2007 05:00:12PM 6 points [-]

It's not that the one agreed with me and declared himself no longer selfish, but that he showed nonzero reactivity in the face of an unexpected argument, a rare thing. Further conversation (not shown) did seem to show that he was thinking about it. You don't see, say, Caledonian ever updating his views, or showing nonzero dependency between what I say and his ability to comment negatively on every post.

Comment author: Caledonian2 08 November 2007 08:08:47PM 0 points [-]

Ah, but I have yet to be confronted with an argument that would cause me to update my views.

So now the problem expands: if two people disagree about the worth of an argument, what criteria do we use to choose between them? What if we ARE one of the two people? How do we use our own judgement to evaluate our own judgement?

Simply, we can't. At best, we can look for simple errors that we presume we can objectively determine, and leave the subtle issues to other systems.

I don't see you admitting you were wrong in the previous threads, Eliezer. Should I interpret that as your unwillingness to admit to error, or that you're so much smarter than me that I can't even comprehend how you're actually correct?

Comment author: George_Weinberg 08 November 2007 08:39:27PM 1 point [-]

"Selfish" in the negative sense means not just pursuing one's own interests, but doing so heedless of the harm one's actions may be causing others. I don't think there are many proponents of "selfishness" in this sense.

There are people that are "selfless" in the sense that they not only don't act according to their direct self-interest, they even abandon their own concepts of true and false, right and wrong, trusting some external authority to make these judgments for them. Religious, political, whatever. People who praise selfishness are generally contrasting it with this kind of selflessness.

Comment author: Brandon_Reinhart 08 November 2007 09:16:45PM 1 point [-]

My understanding is that the philosophy of rational self-interest, as forwarded by the Objectivists, contains a moral system founded first on the pursuit of maintaining a high degree of "conceptual" volitional consciousness and freedom as a human being. Anything that robs one's life or robs one's essential humanity is opposed to that value. The Objectivist favor of capitalism stems from a belief that capitalism is a system that does much to preserve this value (the essential freedom and humanity of individuals). Objectivists are classical libertarians, but not Libertarians (and in fact make much of their opposition to that party).

I believe that an Objectivist would welcome the challenges posed in the post above, but might not consider them a strong challenge to his beliefs simply because they aren't very realistic scenarios. Objectivists generally feel that ethics need not be crafted to cover every scenario under the sun, but instead act as a general guide to a principled life that upholds the pursuit of freedom and humanity.

> "If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

In the long run, exploiting others seems likely to end up a dead-end road. It might be rational and rewarding in the short term, but ultimately it is destructive. Furthermore, it seems to be a violation of principle. If I believe in my own freedom and would not want to be misled, I should not attempt to rob others of their freedom or mislead them without a significantly compelling reason. Otherwise, I'm setting one standard for my own rights and another for others'. By my example, then, there would be no objective ethical standard for me to appeal to against someone attempting to mislead or exploit me. After all, if I set a subjective standard for behavior, why shouldn't they? But this isn't rigorous logic, and it smacks of a rationalization as referenced here:

> "But what I really want to know is this: Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do? Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"

The problem seems to be more general: argument with the intent of converting. That intent alone seems to cast suspicion on the proceedings. A rational person would, it seems to me, be willing to lay his arguments on the table for review and criticism and discussion. If, at some point in the future, others agree they are rational arguments and adopt them as beliefs then everyone should be happy because the objectives of truth and learning have been fulfilled. But "converting" demands immediate capitulation to the point of discussion. No longer is the discussion about the sharing of ideas: reward motivators have entered the room.

Self-edification that one's own view has been adopted by another seems to be a reward motive. Gratification that a challenge has been overcome seems to be a reward motive. Those motives soil the discussion.

> And the one said, "You may be right about that last part," so I marked him down as intelligent.

The man is intelligent, not because he agreed with Eli's point, but because he was reviewing his beliefs in light of new information. His motive was not (at least not entirely) conversion, but genuine debate and learning.

"Intelligence is a dynamic system that takes in information about the world, abstracts regularities from that information, stores it in memories, and uses its knowledge about the world to form goals, make plans, and implement them."

The speaker is doing just that. He might later choose to reject the new information, but at this time he is indicating that the new information is being evaluated.

Comment author: GNZ 09 November 2007 11:30:03AM 0 points [-]

Like Pablo, I have considered whether arguing for (in my case) altruism/utilitarianism is always altruistic, and thought "well, probably" -- but I don't analyse it much, because in the end I don't know if it would matter. It seems I do what I do because "that is what I am" more than because "that is what I think is right". I guess it works the other way too, eh?

Comment author: Daniel_Humphries 09 November 2007 06:43:58PM 1 point [-]

Michael Sullivan:

That's an exceptionally clear exegesis. Thanks!

Pablo Stafforini:

The words of Max Stirner (with whom I am admittedly unfamiliar) that you quote seem to me like so much bluster and semantic question-begging.

Do I write out of love to men? No, I write because I want to procure for my thoughts an existence in the world; and, even if I foresaw that these thoughts would deprive you of your rest and your peace, even if I saw the bloodiest wars blah blah blah

He sings not out of love for the hearer, but because he loves to sing and the hearer is useful in the act of singing? Do I have that right? That is... if his tree falls in the forest and no one is around, it does not make a sound?

Many philosophers (myself included, I believe), would argue that he is describing the functional definition of love: action and desire passing back and forth between two (or more) beings, each one depending on the other for his or her fulfilment and happiness. But it seems he wants to say that his dependence on others is a sign of isolation and not connection... I know my wording here is indefinite, but that's because Stirner's is. How is this bit of poetry anything more than blustering rationalization after-the-fact?

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor?

Comment author: buybuydandavis 20 October 2011 11:20:16AM 1 point [-]

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor?

I'm a fan of Stirner, but have always found this particular passage disingenuous. Certainly he sings because he is a singer. It's possible that he is unconcerned with the effect of his song on those without the ears to hear it, but I don't find it credible that he had no hope of benefiting those with the ears to hear. So I think he writes, at least somewhat, out of love for at least some men.

If you're genuinely selfish, then why do you want me to be selfish too?

Because the usual forms of unselfishness are based in conceptual confusions that make you less useful to me, and often downright dangerous. And, it's both sad and rather distasteful to watch you live your life in such a crippled fashion. Such waste offends my sensibilities.

I would much prefer that my neighbors live for themselves, than live for God, Gaia, Allah, Evolution, Justice, the State, The Volk, The Proletariat, etc.

Comment author: TheOtherDave 20 October 2011 01:34:53PM 0 points [-]

I can see where this might be true, but I can also see where it might be mere sophistry concealing a fundamental concern for your neighbor's well-being. Can you provide some concrete examples of typical ways in which your neighbors' unselfishness lessens their usefulness to you?

Comment author: buybuydandavis 21 October 2011 12:00:18AM 2 points [-]

I guess I didn't make myself clear. I'm not concealing a fundamental concern for my neighbor's well being. I have it, and I think Stirner does too, despite his disingenuous denial here. Hence my comment that he writes, at least in part, out of love for some men.

Stirner, The Ego and Its Own:

I love men, too, not merely individuals, but every one. But I love them with the consciousness of my egoism; I love them because love makes me happy, I love because loving is natural to me, it pleases me. I know no 'commandment of love'. I have a fellow-feeling with every feeling being, and their torment torments me, their refreshment refreshes me too.

It's the commandment of love he rejects, not his own love.

Loving other men is no more unselfish than loving your car. You like it shiny and running well, maybe even after you sell it to someone else.

It is perfectly selfish to love what you love, and hate what you hate. To care about what you care about. Why should I limit my concerns to what lies in a 1 inch bubble around myself? It's not what concerns you that makes you an egoist, it's whether you bow to an ideological compulsion to serve an alien concern over your own.

My own take on Stirner's Egoism is that it is best distinguished as the antidote to various forms of Moral Objectivism, not Altruism.

Comment author: taelor 21 October 2011 07:08:00AM *  0 points [-]

It is perfectly selfish to love what you love, and hate what you hate. To care about what you care about. Why should I limit my concerns to what lies in a 1 inch bubble around myself? It's not what concerns you that makes you an egoist, it's whether you bow to an ideological compulsion to serve an alien concern over your own.

Note that Stirner believed that it is impossible to serve an alien concern over your own. That fact that you are concerned with something makes it your concern. Stirner called people who claim to serve an alien concern above their own "involuntary egoists", and found the entire state of affairs to be laughably absurd.

Comment author: antigonus 25 October 2011 05:17:52PM 0 points [-]

Loving other men is no more unselfish than loving your car.

This makes little sense to me. Other people, unlike cars, have interests; and loving other people tends to have the effect of causing one to adopt those interests as one's own. What exactly is unselfishness supposed to look like, if not that?

Comment author: TGGP4 09 November 2007 09:16:25PM 0 points [-]

Daniel, you can read Stirner's book here. It's not really "macho"; that would be more Ragnar Redbeard's "Might Is Right".

Among the things Stirner writes about is freedom of expression. He does not care what the state or the church say he may write, because he takes such freedom for himself (many unauthorized printings of his book were made). He does not respect the holy but instead regards taboos against blasphemy as attempted restrictions on him that he will violate. To people who say that he ought not to speak of certain things because they are horrible and upsetting, he says the uninterrupted calm of others is no concern of his.

Stirner does not reject all notions of love or even alms-giving. He just views them in an egoistic manner, imagining a Union of Egoists (which may consist in something as simple as two friends going for a walk) that find benefit in each other.

Does Max Stirner offer a less macho, less silly, more considered response to the objections that Eliezer raised with his "selfish" interlocutor? Stirner can be said to offer a response (though I suppose not in a literal sense since he has been dead for so long) but you do not strike me as inclined to give it a fair reading.

Comment author: Daniel_Humphries 09 November 2007 11:16:00PM 1 point [-]

TGGP:

Thanks for the tip. I'll check it out.

Comment author: Recovering_irrationalist 10 November 2007 02:29:43AM 4 points [-]

Eliezer's descriptions of his intellectual history and past mistakes are very convincing positive signals to me.

I agree, but have a nagging doubt. When I read years-old writings where he makes some of those mistakes he sounds about as knowledgeable, just as smart, just as honest, and just as sure he's right as he is now in his new beliefs.

I was convinced by very few of those older mistakes (before I searched and found retractions), but that could just as easily mean the new arguments got super persuasive rather than super accurate.

His writings have convinced me to change many beliefs recently. How much of that is down to elite arguing skills, well-practiced from convincing the toughest judge of all, himself? What fraction of those beliefs is likely to be wrong, however convincing the arguments sound to me, and to him?

Believe it or not, this isn't meant to be critical. I can't fault the way he's currently guiding his belief system, and to me he seems further along the path I'm trying to start on than anyone I know. I'm just not sure how to objectively judge from back here how far along the path he's really managed to get.

Thoughts about other sources of knowledge welcome, this doesn't need to be about one person.

Comment author: Recovering_irrationalist 10 November 2007 02:38:17AM 0 points [-]

PS. I know arguments should be judged based on their own worth and not who made them, but there are other factors.

Comment author: Caledonian2 10 November 2007 04:17:27AM 2 points [-]

Sometimes people change their minds because new evidence and novel arguments have forced them to re-evaluate their positions.

Other times, people have never thought through even the obvious arguments, or are convinced easily by weak data and unimpressive theses, so they shift from one point to another.

If someone changes their positions frequently, AND they're very confident in their positions, that's a bad sign.

Comment author: gutzperson 10 November 2007 09:45:27AM 1 point [-]

Eliezer: “Other mischievous questions to ask self-proclaimed Selfishes: "Would you sacrifice your own life to save the entire human species?" (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.)”

What if the person saving the entire human species wanted to commit suicide anyway, or had a dream of heroism and hoped to become immortal and be rewarded in an afterlife? An apparently selfless deed can be a very selfish one.

Comment author: ChrisA 10 November 2007 09:52:17AM 0 points [-]

The question Eliezer raises is the first problem any religious person has to face once he abandons the god thesis, i.e. why should I be good now? The answer, I believe, is that you cannot act contrary to your genetic nature. Our brains are wired (or have modules, in Pinker's terms) for various forms of altruism, probably for group survival reasons. I therefore can't easily commit acts against my genetic nature, even if intellectually I can see they are in my best interests. (As Eliezer has already recognised, this is why AI or uploaded personalities are so dangerous; they will be able to rewrite the brain code that prevents widespread selfishness. I say dangerous, of course, because the first uploaded person or AI will likely not be me, so they will be a threat to me.)

More simply, the reason I don't steal from people is not that stealing is wrong, but that my genetic programming (perhaps also an element of social conditioning) is such that I don’t want to steal, or have an active non-intellectual aversion to stealing.

Why do I try to convince you of this point of view if I am intellectually convinced that I should be selfish? I agree with Robin: it is because I am genetically programmed to do so, probably related to status seeking. Also, I genuinely would like to hear arguments against this point of view, in case I am wrong.

Eliezer, if genetics is the source of our ethical actions, it is unlikely we can ever develop a consistent ethical theory. If you accept this, does it not present a big problem for your attempt to create an ethical AI? Is it possible your rejection of this approach to ethics, and your attempt to prove a standalone moral system, is subconsciously driven by the impact this would have on your work?

Comment author: Recovering_irrationalist 10 November 2007 07:03:14PM 2 points [-]

If someone changes their positions frequently, AND they're very confident in their positions, that's a bad sign.

I don't think he changes his mind too frequently, and being overly confident at the time of now-abandoned positions isn't unusual. My point was that in Eliezer's case a knowledgeable, smart, honest, and self-certain argument doesn't imply strong evidence of truth, because those qualities appear in arguments that turned out false.

To be honest I think I was hoping someone would leap to his defense and crush my argument, giving me permission to be as sure as he is about his beliefs that I've adopted, whereas what I should do is keep a healthy amount of skepticism and resist any urge to read "Posted by Eliezer" as "trust this".

Comment author: Nick_Tarleton 11 November 2007 07:39:14PM 0 points [-]

ChrisA, why are you intellectually convinced you should be selfish? Rationality doesn't demand any particular goals. A genuinely altruistic person, if uploaded, would overwrite the "brain code" (a bad analogy; evolved tendencies aren't deterministic "code") that promotes selfishness.

Comment author: Eliezer_Yudkowsky 11 November 2007 09:38:32PM 0 points [-]

Recovering,

While I wish I had something reassuring to say on this subject, you should probably be quite disturbed if you find my work from 1997 sounding as persuasive as my work from 2007.

Comment author: ChrisA 12 November 2007 09:53:26AM 1 point [-]

Nick

My response is, evolution! Let's say a genuinely (whatever that means) altruistic entity exists. He is then uploaded. He then observes that not all entities are fully altruistic; in other words, they will want to take resources from others. In any contest over resources this puts the altruistic entity at a disadvantage (he is spending resources helping others that he could use to defend himself). With potentially mega-intelligent entities, any weakness is serious. He realises that he will very quickly be eliminated if he doesn't fix this weakness. He either fixes the weakness (becomes selfish) or he accepts his elimination. Note that uploaded entities are likely to be very paranoid: after all, when one is eliminated, a potentially immortal life is eliminated, so they should have very low discount rates. You might be a threat to me in a million years, so if I get the chance I should eliminate you now.

If your answer is that the altruistic entities will be able to use cooperation to defend themselves against the selfish ones, you must realise there is nothing to stop a genuinely selfish entity from pretending to be altruistic. And the altruistic entities will know this.

I don't think that most people realise that the reason we can work as a society is that we have hardwired cooperation genes in us, and we know that. We are not altruistic through choice. Allow us to make the decision on whether to be altruistic and the game theory becomes very different.

Comment author: Recovering_irrationalist 12 November 2007 12:05:30PM 2 points [-]

While I wish I had something reassuring to say on this subject, you should probably be quite disturbed if you find my work from 1997 sounding as persuasive as my work from 2007

But I said...

I was convinced by very few of those older mistakes (before I searched and found retractions), but that could just as easily mean the new arguments got super persuasive rather than super accurate.

All my comments mean in practice is that even though once I study and investigate your (new) arguments they nearly always seem to me to be right, I won't let myself start lazily suspending critical judgment and investigation of your beliefs before adopting them. I hope you agree that's a good thing.

Comment author: Eliezer_Yudkowsky 12 November 2007 06:56:20PM 2 points [-]

Fair enough, Recovering. My own point is that:

When I read years-old writings where he makes some of those mistakes he sounds about as knowledgeable, just as smart, just as honest, and just as sure he's right as he is now in his new beliefs.

Then those factors aren't very good discriminators of truth, are they? It's not just "improper" to take them into account, it actually doesn't work.

In whatever facets I sounded about as "knowledgeable", "smart", "honest", or "self-assured" then as now, you might take these facets into account in deciding whether someone's arguments are worth your time to read, but you shouldn't take them into account in deciding whether the person is right. Whatever it is that caused you to reject most of my old self's beliefs regardless, is what's doing the actual work of discriminating truth from falsehood, not those other perceptions.

Comment author: Recovering_irrationalist 12 November 2007 10:00:09PM 2 points [-]

In whatever facets I sounded about as "knowledgeable", "smart", "honest", or "self-assured" then as now, you might take these facets into account in deciding whether someone's arguments are worth your time to read, but you shouldn't take them into account in deciding whether the person is right.

Agreed. Having said that, I do find those facets to correlate with truth, but the correlation flattens out for high values. Besides, the first two would be hard for me to judge well between your 1997 and 2007 selves, for obvious reasons. Maybe with the right efforts my 2012 self could get close enough to tell.

Whatever it is that caused you to reject most of my old self's beliefs regardless, is what's doing the actual work of discriminating truth from falsehood, not those other perceptions.

That only works if your new self's views are true, rather than just closer to the truth, or better argued, or less alarm-bell-raising, or fitting better with how my mind works, or what I already believe, or what I want to believe, and so on. That was my point.

Don't worry, it's my neurosis not yours. :-)

Comment author: Mark_Nau 15 November 2007 11:28:07AM 1 point [-]

When I say that I am selfish, I mean to express that I think the best model for "altruism" is that of a good consumed much like any other. I consume it for my personal enjoyment, not in proportion to the benefit received by the recipient. And, ceteris paribus, with a declining marginal utility as quantity consumed increases.

In my eyes, a "true" altruist would value the 1000th meal provided to a starving third-worlder on par with the 1st one provided, given that the beneficiaries valued the meals similarly. Nobody behaves that way. Altruism is a terrible model for human behavior.

This sort of scope insensitivity isn't a logical error for selfish people. I have every reason to value the 1000th orange I consume this week less than the 1st. It's only a conundrum for people claiming to be altruistic. And I would think it would be a killing blow.

Comment author: J_Thomas 15 November 2007 01:01:32PM 0 points [-]

Mark, altruists have to deal with their costs too.

It's possible for an altruist to value the thousandth altruistic meal as much as the first, but as his resources shrink the value of the alternatives rises. If I provide meals for a hundred thousand starving people and then I have nothing left and I become a starving person myself, that isn't good. At some point I want to keep enough capital to maintain my continuing ability to feed starving people.

I'm not claiming that it's true that no altruist experiences diminishing returns, or even that there is an altruist who doesn't experience diminishing returns. But the behavior doesn't prove that there couldn't be, and so this isn't a killing blow.

Comment author: Mark_Nau 15 November 2007 01:25:26PM 1 point [-]

J Thomas,

A non-diminishing-returns altruist would hit a point where the utility of spending a marginal resource on a "selfish" purpose dips below the best use of that resource for an "altruistic" purpose. *Every* *single* marginal resource after that should go toward altruistic purposes as well. Why? Because for anyone with non-astronomical resources, there is effectively an endless supply of altruistic options that all provide effectively the same degree of benefit to the recipient. The non-diminishing-returns altruist would increase altruistic allocation in 1:1 proportion to increases in resources.

I know of nobody like this, and it strikes me intuitively as a horrible starting point for a model of any portion of human behavior.

Goods that fall under the heading of "altruistic" are just like any other goods, with people exhibiting different personal tastes and preferences to consume them for their own benefit.
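The allocation argument above can be sketched as a toy numerical model (the utility numbers and functions here are purely hypothetical, chosen only to illustrate the crossover behavior): once selfish consumption's diminishing marginal utility dips below the constant marginal utility of altruism, every further unit of resources goes to altruism.

```python
# Toy model of a non-diminishing-returns altruist.
# Selfish consumption has diminishing marginal utility; altruistic
# spending has constant marginal utility (the "effectively endless
# supply of similar beneficiaries" assumption). All numbers are
# hypothetical, for illustration only.

def marginal_selfish_utility(units_consumed):
    # Diminishing returns: each additional unit is worth half the last.
    return 100 * (0.5 ** units_consumed)

ALTRUISTIC_MARGINAL_UTILITY = 10  # constant, by assumption

def allocate(total_resources):
    """Spend each marginal unit on whichever use yields more utility."""
    selfish, altruistic = 0, 0
    for _ in range(total_resources):
        if marginal_selfish_utility(selfish) > ALTRUISTIC_MARGINAL_UTILITY:
            selfish += 1
        else:
            altruistic += 1
    return selfish, altruistic

print(allocate(10))  # (4, 6): selfish spending stops at the crossover
print(allocate(20))  # (4, 16): every extra unit goes to altruism, 1:1
```

Under these assumptions, doubling total resources leaves selfish consumption fixed at the crossover point and routes the entire increase to altruism, which is the 1:1 proportional behavior the comment claims no actual human exhibits.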

Comment author: James_M. 18 November 2007 09:20:56PM 1 point [-]

As a selfish prizefighter, I want to beat my opponent. If I were an altruist instead, I don't think I'd be able to win a single fight. Because I am in fact selfish, fighting an opponent who is an altruist would not do much for my self-esteem. Only by fighting better fighters than I am do I learn, not by fighting someone inferior. If a superior fighter does not do his best in a given match with me for some reason, I cannot objectively pretend to be better than him just because I won once. It benefits me to beat him when he's at his best. I like to share my knowledge, so I teach others. It benefits me when someone learns a technique I teach them well, and puts their own take on it. Thus my student becomes my teacher, and I am that much better off for it. There may come a time when my student defeats me, and though I will probably be upset about getting old and slow, a part of me will be proud of him, and of myself.

Anyone I've met who's worth their salt is generally not afraid of their own shadow, and doesn't hoard ideas or knowledge for fear that someone will outdo them. Regardless, someone always does. If in life you either sink or swim, merely floating is like compromising between life and death, and between the two, only death gains from the compromise, not life.

It's a philosophy of life, so of course there will be people who disagree, or don't really follow even if they do agree. But in terms of what kinds of people gravitate to each other, even if you disagree you're probably more likely to gravitate to people who are good at what they do and are willing to teach you. Thus I have met people who are sufficiently selfish, but not necessarily objective or good at what they do, and a load of other permutations, but I've never met someone who's exceptional at what they do who isn't selfish. You don't get good by not knowing what you want and not achieving it.

Comment author: Tim_Tyler 27 July 2008 05:14:52PM 0 points [-]

Re: It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior

This is almost certainly not true of conscious genetic selfishness. Such individuals can be expected to engage in various rare and unusual activities - such as donating to sperm banks.

Comment author: [deleted] 12 August 2010 04:51:41AM 1 point [-]

There's all kinds of incoherency with the idea of "selfishness" being defined as "acting in your own interest." What is your own interest? Can't you define it circularly as being anything you desire to do? Being a "selfish person" doesn't necessarily make sense by that definition.

Maybe it's better to go the opposite direction. Selfishness is indifference to the desires and well-being of others. Whatever else you may be doing (let's drop the question of whether it's "self-interested" or not), it's more important to you than other people. I wouldn't be surprised if 100% selfish people exist in this sense. All you'd have to do is never make a decision where the deciding factor is someone else's well-being.

Comment author: buybuydandavis 21 October 2011 12:02:03AM 2 points [-]

I'm relieved that at least a few people mentioned Stirner. The selfishness EY portrays is not representative of selfishness in the sense of any of the literature on Philosophical Egoism of which I am aware. His scenario doesn't even correspond to what a Randian might believe about selfishness.

Max Stirner is the best and most intellectually consistent egoist I'm aware of. Less accomplished but more polemical writers in the same vein include John L. Walker, Benjamin Tucker, John Badcock, Dora Marsden (for a period of time), and Sid Parker.

Max Stirner's "The Ego and His Own" is available online at various sites, including Gutenberg. Many of the less canonical works can be found at: http://i-studies.com/journal/index.shtml.

Works by Marsden (Freewoman) and Parker (Minus One) are archived there, as well as issues of Non Serviam and i-studies, published by Svein Olav Nyberg. The Nyberg publications contain scattered articles by Prof. Lawrence Stepelevich, the one time president of the Hegel Society of America, who is the best professional philosopher on Stirner that I am aware of (most are just awful), although I've heard good things about John F. Welsh's "Max Stirner's Dialectical Egoism: A New Interpretation".

My own take on Stirner's Egoism is that it is best distinguished as the antidote to various forms of Moral Objectivism, not Altruism. The Metaethics sequence, which I have not completed yet, leaves me thinking I'll feel the urge to share a few thoughts on Stirner once I'm done with the sequence.

Comment author: taelor 21 October 2011 06:50:12AM *  0 points [-]

I've always felt that both "selfishness" and "altruism" were results of the Fundamental Attribution Error. Some actions are deemed "selfish" according to society's mean value set, others are deemed "altruistic" or "selfless". Personally, I'm more interested in the chain of events that ultimately lead up to an action being performed and in the chain of events that occurred as a result of it than I am in applying labels of dubious and limited utility to things.

Comment author: blacktrance 26 June 2013 07:15:13PM *  0 points [-]

"If you're genuinely selfish, then why do you want me to be selfish too? Doesn't that make you concerned for my welfare? Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

You're conflating selfishness with vulgar egoism. Suppose your well-being makes me happy, and I believe that making you selfish will make you happier. Then convincing you to be selfish is the self-interested thing to do. If I tried to convince you to be more altruistic (in the self-sacrificing sense, not in the benevolent sense) so I could exploit you more, that would be bad for you, which outweighs the benefit I'd get from exploiting you. Selfishness is about maximizing your hedons, which does not at all imply not caring about others - in fact, it usually means caring for some others.

Comment author: zackkenyon 26 June 2013 07:36:02PM 0 points [-]

If I didn't know that someone was going to be tortured, I would rather stub my toe, and I do not claim to be selfish. Otherwise I am not really sure how to interpret the question.