Comment author: polymathwannabe 05 September 2014 06:38:28PM 3 points

Your original post says,

the logical conclusion is that we should completely destroy the universe, in a quick and painless manner

Would you please describe the sequence of thoughts leading to that conclusion?

Comment author: mgg 06 September 2014 08:55:14PM 2 points

Sure. The goal is to make TotalSuffering as small as possible, where each individual's Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero, like the pain of hurting your leg while trying to run faster, or things like that. The goal is to make sure no one is in real suffering, not to eliminate all Fun.

One approach is to make sure no one is suffering. That entails a gigantic amount of work. And if I understand MWI correctly, it's actually impossible, since branches will keep occurring that create a sort of hell. (Only considering forward branches.) Sure, it "all averages out to normal", but tell that to someone in a hell branch.

The other approach is to eliminate all life (or the universe). Suffering is then 0, the optimal value.
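A minimal formalization of this reasoning (the symbols and the world-as-a-set framing are my own gloss on the comment, not notation used in the thread): if each individual's suffering is non-negative, then the empty world trivially attains the global minimum of total suffering.

$$
\text{TotalSuffering}(W) \;=\; \sum_{i \in W} S_i, \qquad S_i \ge 0
\;\;\Longrightarrow\;\;
\text{TotalSuffering}(\varnothing) \;=\; 0 \;=\; \min_{W} \text{TotalSuffering}(W).
$$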

Comment author: polymathwannabe 05 September 2014 02:21:05PM 3 points

A world where I live among billions of happy people who have realistic chances of meeting their goals is one I find much more desirable than a world where my friends and I are the only successful people in existence.

On one hand, there's the cold utilitarian who values other lives only insofar as they further hir goals, and assigns no intrinsic worth to whatever goals they may have for themselves. This position does not coincide with solipsism, but it overlaps with it. On the other hand, there's what we could call the naïve Catholic, who holds that more life is always better, no matter in what horrid conditions. This position does not coincide with panpsychism, but it overlaps with it.

The strong altruistic component of EY's philosophy is what sets it on a higher moral ground than Ayn Rand's. For all her support of reason, Rand's fatal flaw was that she failed to grasp the need for altruism; it was anathema to her, even if her brand of selfishness was strange in that she recognized other people's right to be selfish too (the popular understanding of selfishness is more predatory than even she allowed).

EY agrees with Rand's position that every mind should be free to improve itself, but he doesn't dismiss cooperation. It makes perfect sense: The ferociously competitive realm of natural selection does often select for cooperation, which strongly suggests it's a useful strategy. I can't claim to divine his reasons, but the bottom line is that EY gets altruism.

(As chaosmage suggested, it is not impossible that EY merely pretends to be an altruist so people will feel more comfortable letting him talk his way into world domination (ahem, optimization), but the writing style of his texts about the future of humanity and about how much it matters to him is likelier if he really believes what he says.)

Still, the question stands: Why care about random people? I notice it's difficult for me to verbalize this point because it's intuitively obvious to me, so much so that my gut activates a red alarm at the sight of a fellow human who doesn't share that feeling.

Whence empathy? Although empathy has a long tradition of support in many philosophies, antiquity alone is not a valid argument. Warlike chimpanzees share as much DNA with us as hippie bonobos do; mirror neurons have not been conclusively proven to exist; and disguised sociopathy sounds like an optimal strategy.

Buddhism has a concept that I find highly appealing. It's called metta and it basically states that sentient beings' preference for not suffering is one you can readily agree with because you're a sentient being too. There are several ways to express the same idea in contemporary terms: We're all in this together, we're not so different, and other feel-good platitudes.

We can go one step further and assert this: A world where only some personal sets of preferences get to be realized runs the risk of your preferences being ignored, because there's no guarantee that you will be the one who decides which preferences are favored; whereas a world where all personal sets of preferences are equally respected is the one where yours have the best chance of being realized. To paraphrase the Toyota ads, what's good for the entire world is good for you.

(I know most LWers will demand a selfish justification for altruism because any rational decision theory will require it, but I feel hypocritical having to provide a selfish argument for altruism. Ideally, caring for others shouldn't need to be justified by resorting to an expected personal benefit, but I acknowledge that trying to advance this point is like trying to show a Christian ascetic that hoping to get to heaven by renouncing worldly pleasures is the epitome of calculated hedonism. I still haven't resolved this contradiction, but fortunately this is the one place in all the Internet where I can feel safe expecting to be proved wrong.)

Comment author: mgg 05 September 2014 06:13:11PM 1 point

But he views extinction-level events as "that much worse" than a single death. Is an extinction-level event really that bad, though? If everyone gets wiped out, there's no suffering left.

I'm not against others being happy and successful, and sure, that's better than them not being. But I seem to have no preference for anyone existing. Even myself, my kids, my family - if I could, I'd erase the entire lot of us, but it's just not practical.

Comment author: chaosmage 05 September 2014 12:54:40PM * 4 points

You don't know that he does. You only know that he says he does. Also, MIRI needs your donations!

In all seriousness, it appears that he simply has a much larger circle of empathy than you do. Yours only includes yourself, children, family and friends, which sounds like (what Peter Singer has convincingly argued to be) the default setting that evolution presumably gave you a sense of empathy for because that'd promote the survival of your genes. But that circle can expand, and in fact it has tended to expand over the last couple of millenia. In Eliezer's case, it appears to include at least all humans. And why? Well, my suspicion is that people have a distaste for contradictions, and any arbitrary limit to empathy is inherently fraught with contradictions. ("Is it okay for a policeman to not care about you because you're not his friend?" "How many non-friends would you kill to save the life of a friend?" etc.) And maybe maybe Eliezer simply has a greater sensitivity to, and distaste for, contradictions than you do.

Comment author: mgg 05 September 2014 06:09:13PM 1 point

This is something to think about, thanks.

What about the seeming preference for existence over non-existence? How do you morally justify keeping people around when there is so much suffering? In specks versus torture, why not simply erase everyone?

Comment author: mgg 05 September 2014 05:07:11AM 6 points

Why does Eliezer love me?

In many articles, EY mentions that Death is bad, as if it's some terminal value. That even the loss of me is somehow a negative for him. Why?

I've been thinking that it's Suffering that should be minimized, in general. Death is only painful for people because of the loss others suffer. Yes, the logical conclusion is that we should completely destroy the universe, in a quick and painless manner. The "painless" part is the catch, of course, and it may be so intractable as to render the entire thought pointless. (That is, we cannot achieve this, so we might as well give up and focus on making things better.)

Even outside of Suffering, I still do not see why an arbitrary person is to be valued. Again, EY seems to have this as some terminal value. Why?

I love my children, I love my family, I love some friends. After that, I don't really care all that much about individuals, except to the extent that I'd prefer them to not suffer. I certainly don't feel their existence alone is something that valuable, intrinsically.

Am I wicked or something? Am I missing some basic reasoning? I see my position may be described as "negative utilitarian", but I haven't come across anything in particular that makes such a position less desirable.
