Followup to: Something to Protect, Superhero Bias

Long-time readers will recall that I've long been uncomfortable with the idea that you can adopt a Cause as a hedonic accessory:

"Unhappy people are told that they need a 'purpose in life', so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes."

But conversely it's also a fact that having a Purpose In Life consistently shows up as something that increases happiness, as measured by reported subjective well-being.

One presumes that this works equally well hedonically no matter how misguided that Purpose In Life may be—no matter if it is actually doing harm—no matter if the means are as cheap as prayer.  Presumably, all that matters for your happiness is that you believe in it.  So you had better not question overmuch whether you're really being effective; that would disturb the warm glow of satisfaction you paid for.

And here we verge on Zen, because you can't deliberately pursue "a purpose that takes you outside yourself", in order to take yourself outside yourself.  That's still all about you.

Which is the whole Western concept of "spirituality" that I despise:  You need a higher purpose so that you can be emotionally healthy.  The external world is just a stream of victims for you to rescue.

Which is not to say that you can become more noble by being less happy.  To deliberately sacrifice more, so that you can appear more virtuous to yourself, is also not a purpose outside yourself.

The way someone ends up with a real purpose outside themselves, is that they're walking along one day and see an elderly woman passing by, and they realize "Oh crap, a hundred thousand people are dying of old age every day, what an awful way to die" and then they set out to do something about it.

If you want a purpose like that, then by wanting it, you're just circling back into yourself again.  Thinking about your need to be "useful".  Stop searching for your purpose.  Turn your eyes outward to look at things outside yourself, and notice when you care about them; and then figure out how to be effective, instead of priding yourself on how much spiritual benefit you're getting just for trying.

With that said:

In today's world, most of the highest-priority legitimate Causes are about large groups of people in extreme jeopardy.  (Wide scope * high severity.)  Aging threatens the old, starvation threatens the poor, existential risks threaten humankind as a whole.

But some of the potential solutions on the table are, arguably, so powerful that they could solve almost the entire list.  Some argue that nanotechnology would take almost all our current problems off the table.  (I agree but reply that nanotech would create other problems, like unstable military balances, crazy uploads, and brute-forced AI.)

I sometimes describe the purpose (if not the actual decision criterion) of Friendly superintelligence as "Fix all fixable problems such that it is more important for the problem to be fixed immediately than fixed by our own efforts."

Wouldn't you then run out of victims with which to feed your higher purpose?

"Good," you say, "I should sooner step in front of a train, than ask that there be more victims just to keep myself occupied."

But do altruists then have little to look forward to, in the Future?  Will we, deprived of victims, find our higher purpose shriveling away, and have to make a new life for ourselves as self-absorbed creatures?

"That unhappiness is relatively small compared to the unhappiness of a mother watching her child die, so screw it."

Well, but like it or not, the presence or absence of higher purpose does have hedonic effects on human beings, configured as we are now.  And to reconfigure ourselves so that we no longer need to care about anything outside ourselves... does sound a little sad.  I don't practice altruism for the sake of being virtuous—but I also recognize that "altruism" is part of what I value in humanity, part of what I want to save.  If you save everyone, have you obsoleted the will to save them?

But I think it's a false dilemma.  Right now, in this world, any halfway capable rationalist who looks outside themselves, will find their eyes immediately drawn to large groups of people in extreme jeopardy.  Wide scope * great severity = big problem.  It doesn't mean that if one were to solve all those Big Problems, we would have nothing left to care about except ourselves.

Friends?  Family?  Sure, and also more abstract ideals, like Truth or Art or Freedom.  The change that altruists may have to get used to, is the absence of any solvable problems so urgent that it doesn't matter whether they're solved by a person or an unperson.  That is a change and a major one—which I am not going to go into, because we don't yet live in that world.  But it's not so sad a change, as having nothing to care about outside yourself.  It's not the end of purpose.  It's not even a descent into "spirituality": people might still look around outside themselves to see what needs doing, thinking more of effectiveness than of emotional benefits.

But I will say this much:

If all goes well, there will come a time when you could search the whole of civilization and never find a single person so much in need of help, as dozens you now pass on the street.

If you do want to save someone from death, or help a great many people—if you want to have that memory for yourself, later—then you'd better get your licks in now.

I say this, because although that is not the purest motive, it is a useful motivation.

And for now—in this world—it is the usefulness that matters.  That is the Art we are to practice today, however we imagine the world might change tomorrow.  We are not here to be our hypothetical selves of tomorrow.  This world is our charge and our challenge; we must adapt ourselves to live here, not somewhere else.

After all—to care whether your motives were sufficiently "high", would just turn your eyes inward.

 

Part of The Fun Theory Sequence

Next post: "31 Laws of Fun" (sequence guide)

Previous post: "The Uses of Fun (Theory)"

28 comments

One wonders if it is possible to make finding one's purpose in life one's purpose in life. At least the logical paradoxes would be briefly amusing.

There is a line in Scary Movie 3 that goes something like this: "What's your dream?" "To have a dream."


Have you seen Avenue Q?

That used to be my purpose in life, until I discovered nihilism.

we must adapt ourselves to live here, not somewhere else.

I try to explain this when I get called a racist for not wanting to go to a poor or predominantly black and Latino neighborhood. I have a high preference for a world in which race doesn't make a bit of difference. However, today, in this world, I am two orders of magnitude more likely to have a crime perpetrated against me in said neighborhood.

I also bring this up when people respond to my ideas with unlikely sinking ship/rescue raft ethical scenarios. There are enough regular problems that need solving. As the world becomes more gentle we can start putting more effort into the edge cases.

And again with global warming. I'd love to live in a world in which a .3% change in radiation absorption was our most pressing concern. However in this world we could save many more lives and species by refocusing our money and efforts.

in this world I am 2 orders of magnitude more likely to have a crime perpetrated against me in said neighborhood.

Presumably, then, your actual objection is to living in predominantly high-crime neighborhoods, and you'd be fine with poor/black/Hispanic neighborhoods? I'd certainly suspect racism if the objection was "but poor = crime!" rather than "looking at the actual statistics, that neighborhood has high crime".

One wonders if it is possible to make finding one's purpose in life one's purpose in life.

Of course it is. That's philosophy right there :->

Perhaps a webpage should be written that attempts to persuade smart egotists that if they want to accumulate wealth for the long-term future, it would be rational to choose "good guy points" as that wealth, instead of e.g. money.

Since if our civilization doesn't self-destruct, we will end up spending billions of years in a Happy World where there are no more "good guy points" available to accumulate. It will be the ultimate scarce resource, in a world where very few things are scarce. Accumulating money instead of "good guy points" seems stupid to me, if one is thinking for the long-term.

I'll also note here, that during billions of years of Happy Life, historians will certainly study every last detail of the very limited amount of pre-utopia history that the world experienced. We'll probably see fine-tuned High Score tables of how many "good guy points" each surviving individual has collected. (This study of history will be made substantially easier once we have the technology to extract the memories of every willing person into the public domain.)

I know this is 12 years old but this is one of the greatest things I have ever read.

"I try to explain this when I get called a racist for not wanting to go to a poor/predominantly black and latino neighborhood. I have a high preference for a world in which race doesn't make a bit of difference. However today, in this world I am 2 orders of magnitude more likely to have a crime perpetrated against me in said neighborhood."

This bit of haphazard reasoning could be more focused: what neighborhood the commenter is leaving to enter the predominantly poor or minority neighborhood (itself a hazy conflation), the time of day, the type of crime, the frequency of crime reported in said neighborhood, etc. Information is available to predict more accurately whether one might be the victim of a crime, but I'm not sure the commenter has that information in his/her head as a motivator for the decision in real time.

This makes eudaimonist egoism seem simpler, more elegant by comparison. I don't need a stream of victims now, and I won't need them post-Singularity.

Daniel Dennett's standard response to the question "What's the secret of happiness?" is "The secret of happiness is to find something more important than you are and dedicate your life to it."

I think that this avoids Eliezer's criticism that "you can't deliberately pursue 'a purpose that takes you outside yourself', in order to take yourself outside yourself. That's still all about you." Something can be more important than you and yet include you. Depending on your values, the future of the human race itself could serve as an example. It would seem also to be still an available "hedonic accessory" in any eutopia that includes humanity in some form.

I read a lot of this type of stuff back in the 1990s. Your "purpose" doesn't need to be spiritual or altruistic or even helpful to others; all that is necessary is that it keeps you from dwelling on yourself and your own problems. Serious study, if it is interesting enough to you and difficult enough to really engage your attention, is more than enough to gain you the benefits of a "purpose".

Perhaps a webpage should be written that attempts to persuade smart egotists that if they want to accumulate wealth for the long-term future, it would be rational to choose "good guy points" as that wealth, instead of e.g. money.

For an egoist with a low discount rate and broad enough self-concept to make transhumanity appealing, even this may be unnecessary to make Singularitarianism highly attractive. I suspect that the idea of benefiting yourself by helping everyone including yourself is unintuitive enough, though, to make this far less apparent than it could be.

Eliezer: For some reason this seems to me to bring a lot of things together very concisely, but doesn't seem to require too much background reading. I expect I will send people here.

Nazgul, Justin: Also important are the actual rate and risk. If the original crime rate is quite low, two orders of magnitude more is still probably not enough risk to rationally pass on something of value. (Say, visiting a friend in said neighborhood.)

I've mostly heard this kind of argument from people who drive cars.

"There would never be another Gandhi, another Mandela, another Aung San Suu Kyi—and yes, that was a kind of loss, but would any great leader have sentenced humanity to eternal misery, for the sake of providing a suitable backdrop for eternal heroism? Well, some of them would have. But the down-trodden themselves had better things to do." —from "Border Guards"

Altruism doesn't only mean preventing suffering. It also means increasing happiness. If all suffering were ended, altruists would still have purpose in providing creativity, novelty, happiness, etc. "Suffering" then becomes not experiencing unthinkable levels of [insert positive emotion here], and philanthropists will be devoted to ensuring that all sentient entities experience all they can. The post-singularity Make-a-Wish foundation would see rapid growth in both its services and its volunteers as it operates full-time with repeat customers.

And here we verge on Zen, because you can't deliberately pursue "a purpose that takes you outside yourself", in order to take yourself outside yourself. That's still all about you.

No, I mean, this looks reasonable. It probably gets at a higher truth of some sort. But it's the same mistake as thinking that a scientific hypothesis can't be true because it was inspired by an irrational belief. It shouldn't matter where you came up with your hypotheses--it only matters whether a hypothesis best fits the evidence (and, ideally, makes the most falsifiable predictions that turn out to be true). Just because, for example, the drug company profits if everyone thinks their drug cures disease X doesn't mean that the drug doesn't actually cure disease X.

Just because my present self has an altruistic purpose only because my past self found pleasure in thinking that it would be an altruist in the future doesn't detract from my present self's actual altruism. (This is purely hypothetical, I am actually purely selfish across all time, space and possibility)

Come up with a spending plan for a trillion dollar stimulus package that suddenly lands in the collective lap of transhumanists and singularitarians. What goals would a trillion bucks solve? Anything?

What exactly will you do if someone decides to drop you a mountain of money, no strings attached? Can you spend a trillion usefully?

I see no problem letting altruism become obsolete.

I look at Causes as hedonic accessories from a different point of view: Given the history of how many Causes have turned out to contain purple kool-aid, I see the problem not so much as "how can we carefully select rationally desirable Causes" but more nearly as "how can we bypass this part of the human mental landscape altogether--preferably acquiring the hedonic gains of adopting a Cause, while skipping the hazards, both to oneself and to those around one, of actually adopting one."


That's a very interesting proposition. Lao Tzu would like to have a word with you.

[This comment is no longer endorsed by its author]

If all the money (and effort) humanity currently wastes on making better killing machines were rerouted to causes that (currently only) transhumanists and singularitarians--and the bulk of future humanity--care about, what would happen? What is the upper bound of funds above which adding more wouldn't make a difference? Where do you hit severely diminishing returns and have to say enough is enough?

Do you have a list "If only we had practically unlimited money we would..."? I'd like to see one.

After a great deal of navel-gazing, I have realised that I actually get pleasure from serving others. However, such pleasure need not be rescuing them from harm. If no-one is at threat from harm, it might be by entertaining them. My evolved highly social capacity to get pleasure from service will not be unsatisfiable. This can still be "all about me"- I get genuine pleasure, I get social interaction, I get increasing wisdom as I learn what works to "serve others" and what does not.

I might learn to accept what achievements I can make- not creating the perfect FAI, not even singing better than anyone else, merely singing something simple acceptably and giving transient pleasure to one other person.

The Zen arguments, spiritual growth, all that stuff is only relevant if one wants to be a Good person. I do not, I think I am good enough, already.


Higher Purpose =/= Altruism towards other people. At least not necessarily. I know this was implicitly mentioned in the whole "Truth or Art or Freedom" line, but considering that altruism connected with charity is one of the few places where we descend into signalling races, I thought it important to emphasise that.


I'd rather an arbitrary person want to be a good person because it makes em feel good than spend all of eir free time and money on video games because they are fun. I think this post is too hard on people from the former group. After all, they're doing something good rather than something else!

"The external world is just a stream of victims for you to rescue." I do not see much of a problem with this, as I agree with the last point that this is not pure but still useful. Does a high purpose need to be "pure"? Are humans really capable of that? Does it even make sense for humans to be capable of that, from a sociological perspective?

I do think a lot of high purposes come from adverse experiences. Through such an adverse experience, such as rape or sexual assault, a person would understand a bit more of what the challenges are and how miserable it is, and it makes sense for that person to be personally invested in this cause and make contributions to it.