blossom comments on Torture vs. Dust Specks - Less Wrong

39 Post author: Eliezer_Yudkowsky 30 October 2007 02:50AM




Comment author: [deleted] 25 March 2015 09:17:50PM 2 points [-]

It's not (necessarily) about dust specks accidentally leading to major accidents. But if you think that having a dust speck in your eye may be even slightly annoying (whether you consciously know that or not), the cost you incur from having it fly into your eye is not zero.

Now, anything nonzero multiplied by a sufficiently large number will eventually be larger than the cost of one human being's life in torture.
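A minimal arithmetic sketch of this claim; the per-speck and torture costs below are made-up placeholder numbers, chosen only to show that any nonzero per-speck cost eventually dominates:

```python
# Toy numbers, purely illustrative: any nonzero per-speck cost, however
# tiny, outweighs any finite torture cost once multiplied by enough people.
import math

SPECK_COST = 1e-12    # assumed tiny but nonzero annoyance per dust speck
TORTURE_COST = 1e9    # assumed large but finite cost of 50 years of torture

# A population size (with a safety factor of 2) where specks dominate.
n_needed = 2 * math.ceil(TORTURE_COST / SPECK_COST)
assert n_needed * SPECK_COST > TORTURE_COST
```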

Comment author: [deleted] 26 March 2015 10:20:11AM 2 points [-]

Now you are getting it completely wrong. You can't add up the harm from dust specks if it is happening to different people. Every individual has the capability to recover from it. Think about it: with that logic it would be worse to rip a hair from every living being in the universe than to nuke New York. If the people in charge reasoned that way we might have Armageddon in no time.

Comment author: helltank 26 March 2015 10:56:01AM 1 point [-]

That's ridiculous. So mild pains don't count if they're done to many different people?

Let's give a more obvious example. It's better to kill one person than to amputate the right hands of 5000 people, because the total pain will be less.

Scaling down, we can say that it's better to amputate the right hands of 50,000 people than to torture one person to death, because the total pain will be less.

Keep repeating this in your head (see how consistent it feels, how it makes sense).

Now just extrapolate to the instance that it's better to have 3^^^3 people get dust specks in their eyes than to torture one person to death, because the total pain will be less. The hair-ripping argument isn't good enough because [(people on Earth) × (pain from hair rip)] < [(people in New York) × (pain of being nuked)]. The math doesn't add up in your straw-man example, unlike with the actual example given.

As a side note, you are also appealing to consequences.
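The inequality above can be made concrete with hedged placeholder disutilities; the point is that the straw man fails on the size of the multiplier, not on the principle of adding up small harms:

```python
# Assumed toy disutilities; only the relative magnitudes matter here.
PEOPLE_ON_EARTH = 8 * 10**9
PEOPLE_IN_NYC = 8 * 10**6
HAIR_RIP_PAIN = 1e-6   # assumed tiny disutility of one ripped hair
NUKE_PAIN = 1e6        # assumed disutility per person nuked

total_hair_rips = PEOPLE_ON_EARTH * HAIR_RIP_PAIN  # roughly 8e3
total_nuked = PEOPLE_IN_NYC * NUKE_PAIN            # roughly 8e12
# The straw-man math points the opposite way from the dust-speck case:
assert total_hair_rips < total_nuked
```

3^^^3, unlike the Earth's population, dwarfs any such multiplier, which is why the actual example does not fail the same way.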

Comment author: dxu 26 March 2015 04:09:18PM *  1 point [-]

[(people on Earth) × (pain from hair rip)] < [(people in New York) × (pain of being nuked)]

I think Okeymaker was actually referring to all the people in the universe. While the number of "people" in the universe (defining a "person" as a conscious mind) isn't a known number, let's do as blossom does and assume Okeymaker was referring to the Level I multiverse. In that case, the calculation isn't nearly as clear-cut. (That being said, if I were considering a hypothetical like that, I would simply modus ponens Okeymaker's modus tollens and reply that I would prefer to nuke New York.)

Comment author: [deleted] 26 March 2015 11:03:16AM 1 point [-]

If

  1. Each human death has only finite cost. We sure act this way in our everyday lives, exchanging human lives for the convenience of driving around with cars etc.
  2. By our universe you do not mean only the observable universe, but include the level I multiverse

then yes, that is the whole point. A tiny amount of suffering multiplied by a sufficiently large number obviously is eventually larger than the fixed cost of nuking New York.

Unless you can tell me why my model for the costs of suffering distributed over multiple people is wrong, I don't see why I should change it. "I don't like the conclusions!!!" is not a valid objection.

If people in charge reasoned that way we might have harmageddon in no time.

If they ever justifiably start to reason that way, i.e. if they actually have the power to rip a hair from every living human being, I think we'll have larger problems than the potential nuking of New York.

Comment author: [deleted] 26 March 2015 12:27:28PM 0 points [-]

Okay, I was trying to learn from this post, but now I see that I have to try to explain stuff myself in order for this communication to become useful. When it comes to pain it is hard to explain why one person's great suffering is worse than many suffering very, very little if you don't understand it by yourself. So let us change the currency from pain to money.

Let's say that you and I need to fund a large plantation of algae in order to let the Earth's population escape starvation due to lack of food. This project is of great importance for the whole world, so we can force anyone to become a sponsor, and this is good because we need the money FAST. We work for the whole world (read: Earth) and we want to minimize the damages from our actions. This project is really expensive, however... Should we:

a) Take one dollar from every person around the world on at least a minimum wage who can still afford housing, food, etc. even if we take that one dollar?

or should we

b) Take all the money (instantly) from Denmark and watch it break down in bankruptcy?

If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people that they have to sacrifice 1 dollar.

Comment author: [deleted] 26 March 2015 01:30:01PM 1 point [-]

In this case I do not disagree with you. The number of people on earth is simply not large enough.

But if you asked me whether to take money from 3^^^3 people compared to throwing Denmark into bankruptcy, I would choose the latter.

Math should override intuition. So unless you give me a model that you can convince me of that is more reasonable than adding up costs/utilities, I don't think you will change my mind.

Comment author: [deleted] 26 March 2015 02:14:26PM 2 points [-]

Now I see what is fundamentally wrong with the article and your reasoning from MY perspective. You don't seem to understand the difference between a permanent sacrifice and a temporary one.

If we substitute the dust specks with index fingers, for example, I agree that it is reasonable to think that killing one person is far better than having 3 billion (we don't need 3^^^3 for this one) persons lose their index fingers, because that is a permanent sacrifice. At least for now we can't have fingers grow back just like that. To get dust in your eye, on the other hand, is only temporary. You will get over it real quick and forget all about it. But 50 years of torture is something that you will never fully heal from; it will ruin a person's life and cause permanent damage.

Comment author: Jiro 26 March 2015 03:51:09PM 2 points [-]

If you ask me, it is obvious that we don't want Denmark to go bankrupt just because it may annoy some people that they have to sacrifice 1 dollar.

The trouble is that there is a continuous sequence from

Take $1 from everyone

Take $1.01 from almost everyone

Take $1.02 from almost almost everyone

...

Take a lot of money from very few people (Denmark)

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly. You will have to say, for instance, taking $20 each from 1/20 the population of the world is good, but taking $20.01 each from slightly less than 1/10 the population of the world is bad. Can you say that?
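The sequence above can be sketched numerically (the $8e9 total is an assumed figure, chosen only to make "take $1 from everyone" the first step):

```python
# Sketch of the sequence: the per-person amount rises a cent at a time
# while the number of payers shrinks by well under 1% per step, yet the
# endpoints of the sequence feel like entirely different policies.
TOTAL = 8 * 10**9   # assumed fixed total to be raised, in dollars

seq = []
for cents in range(100, 2002):        # $1.00 up to $20.01, one cent at a time
    payers = TOTAL * 100 // cents     # payers needed to keep the total fixed
    seq.append((cents / 100, payers))

assert seq[0] == (1.00, 8 * 10**9)    # "take $1 from everyone"
# Adjacent steps change the payer count by less than 1%:
assert all((a[1] - b[1]) / a[1] < 0.01 for a, b in zip(seq, seq[1:]))
```

Any good/bad threshold therefore has to fall between two steps that differ by a single cent and a sub-1% change in payers.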

Comment author: Lumifer 26 March 2015 04:42:44PM 1 point [-]

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

If you think that 100C water is hot and 0C water is cold, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

Comment author: dxu 26 March 2015 04:49:39PM *  0 points [-]

No, because temperature is (very close to) a continuum, whereas good/bad is binary. To see this more clearly, you can replace the question "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see that the answer can only be "yes" (good) or "no" (bad).

(Of course, it's not always clear which choice the answer is--hence why so many argue over it--but the answer has to be, in principle, either "yes" or "no".)

Comment author: Lumifer 26 March 2015 05:12:01PM *  4 points [-]

No, because temperature is (very close to) a continuum, whereas good/bad is a binary.

First, I'm not talking about temperature, but about categories "hot" and "cold".

Second, why in the world would good/bad be binary?

"Would an omniscient, moral person choose to take this action?"

I have no idea -- I don't know what an omniscient person (aka God) will do, and in any case the answer is likely to be "depends on which morality we are talking about".

Oh, and would an omniscient being call that water hot or cold?

Comment author: dxu 26 March 2015 05:33:37PM *  0 points [-]

First, I'm not talking about temperature, but about categories "hot" and "cold".

You'll need to define your terms for that, then. (And for the record, I don't use the words "hot" and "cold" exclusively; I also use terms like "warm" or "cool" or "this might be a great temperature for a swimming pool, but it's horrible for tea".)

Also, if you weren't talking about temperature, why bother mentioning degrees Celsius when talking about "hotness" and "coldness"? Clearly temperature has something to do with it, or else you wouldn't have mentioned it, right?

Second, why in the world would good/bad be binary?

Because you can always replace a question of goodness with the question "Would an omniscient, moral person choose to take this action?".

I have no idea -- I don't know what an omniscient person (aka God) will do,

Just because you have no idea what the answer could be doesn't mean the true answer can fall outside the possible space of answers. For instance, you can't answer the question "Would an omniscient moral reasoner choose to take this action?" with something like "fish", because that falls outside of the answer space. In fact, there are only two possible answers: "yes" or "no". It might be one; it might be the other, but my original point was that the answer to the question is guaranteed to be either "yes" or "no", and that holds true even if you don't know what the answer is.

the answer is likely to be "depends on which morality we are talking about"

There is only one "morality" as far as this discussion is concerned. There might be other "moralities" held by aliens or whatever, but the human CEV is just that: the human CEV. I don't care about what the Babyeaters think is "moral", or the Pebblesorters, or any other alien species you care to substitute--I am human, and so are the other participants in this discussion. The answer to the question "which morality are we talking about?" is presupposed by the context of the discussion. If this thread included, say, Clippy, then your answer would be a valid one (although even then, I'd rather talk game theory with Clippy than morality--it's far more likely to get me somewhere with him/her/it), but as it is, it just seems like a rather unsubtle attempt to dodge the question.

Comment author: Lumifer 26 March 2015 05:39:30PM *  1 point [-]

In fact, there are only two possible answers: "yes" or "no"

I don't think so.

You're making a circular argument -- good/bad is binary because there are only two possible states. I do not agree that there are only two possible states.

There is only one "morality" for the participants of this discussion.

Really? Either I'm not a participant in this discussion or you're wrong. See: a binary outcome :-D

but the human CEV is just that: the human CEV

I have no idea what the human CEV is, or even whether such a thing is possible. I am familiar with the concept, but I have doubts about its reality.

Comment author: Good_Burning_Plastic 26 March 2015 09:41:32PM 0 points [-]

To see this more clearly, you can replace the question "Is this action good or bad?" with "Would an omniscient, moral person choose to take this action?", and you can instantly see the answer can only be "yes" (good) or "no" (bad).

By that definition, almost all actions are bad.

Also, why the heck do you think there exist words for "better" and "worse"?

Comment author: dxu 27 March 2015 12:30:55AM 0 points [-]

By that definition, almost all actions are bad.

True. I'm not sure why that matters, though. It seems trivially obvious to me that a random action selected out of the set of all possible actions would have an overwhelming probability of being bad. But most agents don't select actions randomly, so that doesn't seem to be a problem. After all, the key aspect of intelligence is that it allows you to hit extremely tiny targets in configuration space; the fact that most configurations of particles don't give you a car doesn't prevent human engineers from making cars. Why would the fact that most actions are bad prevent you from choosing a good one?

Also, why the heck do you think there exist words for "better" and "worse"?

Those are relative terms, meant to compare one action to another. That doesn't mean you can't classify an action as "good" or "bad"; for instance, if I decided to randomly select and kill 10 people today, that would be an unambiguously bad action, even if it would theoretically be "worse" if I decided to kill 11 people instead of 10. The difference between the two is like the difference between asking "Is this number bigger than that number?" and "Is this number positive or negative?".

Comment author: Jiro 26 March 2015 07:23:20PM 1 point [-]

My opinion would change gradually between 100 degrees and 0 degrees. Either I would use qualifiers so that there is no abrupt transition, or else I would consider something to be hot in a set of situations and the size of that set would decrease gradually.

Comment author: dxu 26 March 2015 04:46:35PM 1 point [-]

You will have to say, for instance, taking $20 each from 1/20 the population of the world is good, but taking $20.01 each from slightly less than 1/10 the population of the world is bad. (emphasis mine)

Typo here?

Comment author: [deleted] 26 March 2015 07:27:58PM *  0 points [-]

YES, because that is how economics works! You can't take a lot of money from ONE person without him getting poor, but you CAN take money from a lot of people without ruining them! Money is a circulating resource, and just like pain you can recover from small losses after a time.

Comment author: [deleted] 26 March 2015 07:38:13PM 0 points [-]

If you think that taking $1 from everyone is okay, but taking a lot of money from Denmark is bad, then there is some point in the middle of this sequence where your opinion changes even though the numbers only change slightly.

I think my last response starting with YES got lost somehow, so I will clarify here. I don't follow the sequence because I don't know where the critical limit is. Why? Because the critical limit depends on other factors which I can't foresee. Read up on basic global economy. But YES, in theory I can take a little money from everyone without ruining a single one of them, since it balances out, but if I take a lot of money from one person I make him poor. That is how economics works: you can recover from small losses easily, while some are too big to ever recover from, hence why some banks go bankrupt sometimes. And pain is similar, since I can recover from a dust speck in my eye, but not from being tortured for 50 years. The dust specks are not permanent sacrifices. If they were, I agree that they could stack up.

Comment author: Jiro 26 March 2015 08:33:40PM 2 points [-]

I don't follow the sequence because I don't know where the critical limit is.

You may not know exactly where the limit is, but the point isn't that the limit is at some exact number, the point is that there is a limit. There's some point where your reasoning makes you go from good to bad even though the change is very small. Do you accept that such a limit exists, even though you may not know exactly where it is?

Comment author: [deleted] 26 March 2015 08:35:06PM 0 points [-]

Yes I do.

Comment author: Jiro 26 March 2015 09:04:03PM 2 points [-]

So you recognize that your original statement about $1 versus bankruptcy also forces you to make the same conclusion about $20.00 versus $20.01 (or whatever the actual number is, since you don't know it).

But making the conclusion about $20.00 versus $20.01 is much harder to justify. Can you justify it? You have to be able to, since it is implied by your original statement.

Comment author: [deleted] 26 March 2015 09:17:22PM *  0 points [-]

No, I don't have to make the same conclusion about $20.00 versus $20.01. I left a safety margin when I said 1 dollar, since I don't want to follow the sequence but am very, very sure that 1 dollar is a safe number. I don't know exactly how much I can risk taking from a random individual before I risk ruining him, but if I take only one dollar from a person who can afford a house and food, I am pretty safe.

Comment author: private_messaging 26 March 2015 10:28:25PM *  0 points [-]

Now, do you have any actual argument as to why the 'badness' function computed over a box containing two persons with a dust speck is exactly twice the badness of a box containing one person with a dust speck, all the way up to very large numbers (when you may even have exhausted the number of possible distinct people)?

I don't think you do. This is why this stuff strikes me as pseudomath. You don't even state your premises let alone justify them.

Comment author: [deleted] 26 March 2015 11:31:26PM 0 points [-]

You're right, I don't. And I do not really need it in this case.

What I need is a cost function C(e,n), where e is some event and n is the number of people being subjected to said event (i.e. everyone gets their own), such that for some fixed ε > 0 and every n there is an m with C(e, n+m) > C(e, n) + ε. I guess we can limit e to "torture for 50 years" and "dust specks" so that this makes sense at all.

The reason why I would want to have such a cost function is because I believe that it should be more than infinitesimally worse for 3^^^^3 people to suffer than for 3^^^3 people to suffer. I don't think there should ever be a point where you can go "Meh, not much of a big deal, no matter how many more people suffer."

If however the number of possible distinct people should be finite - even after taking into account level II and level III multiverses - due to discreteness of space and discreteness of permitted physical constants, then yes, this is all null and void. But I currently have no particular reason to believe that there should be such a bound, while I do have reason to believe that permitted physical constants should be from a non-discrete set.

Comment author: private_messaging 27 March 2015 12:22:10PM *  -1 points [-]

Well, within the 3^^^3 people you have every single possible brain replicated a gazillion times already (there are only so many ways you can arrange the atoms in the volume of a human head so as to be computing something subjectively different, after all, and the number of such arrangements is unimaginably smaller than 3^^^3).

I don't think that, e.g., I must massively prioritize the happiness of a brain upload of me running on multiple redundant hardware (which subjectively feels the same as if it were running in one instance; it doesn't feel any stronger because there are more 'copies' of it running in perfect unison, and it can't even tell the difference. It won't affect the subjective experience if the CPUs running the same computation are slightly physically different).

edit: also again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds but it is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

Seriously, you people (LW crowd in general) need to take more calculus or something before your mathematical intuitions become in any way relevant to anything whatsoever. It does feel intuitively that with your epsilon it's going to keep growing without a limit, but that's simply not true.
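The bounded counterexample above can be checked directly (the value 2 for the torture cost is the one assumed in the comment; exact rational arithmetic avoids float rounding at huge n):

```python
from fractions import Fraction

def C_speck(n):
    """The comment's bounded cost: C(n) = 1 - 1/(n+1), computed exactly."""
    return 1 - Fraction(1, n + 1)

C_TORTURE = 2   # the comment's assumed C(torture, 1)

# Strictly increasing in n, yet bounded above by 1, hence always below 2.
for n in (1, 10**3, 10**9, 10**18):
    assert C_speck(n) < 1 < C_TORTURE
assert C_speck(10**18) > C_speck(10**9)   # still increasing, never exceeding 1
```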

Comment author: [deleted] 27 March 2015 02:40:59PM 1 point [-]

I consider entities in computationally distinct universes to also be distinct entities, even if the arrangements of their neurons are the same. If I have an infinite (or sufficiently large) set of physical constants such that in those universes human beings could emerge, I will also have enough human beings.

edit: also again, pseudomath, because you could have C(dustspeck, n) = 1 - 1/(n+1); your property holds but it is bounded, so if C(torture, 1) = 2 then you'll never exceed it with dust specks.

No. I will always find a larger number which is at least ε greater. I fixed ε before I talked about n and m. So I find numbers m_1, m_2, ... such that C(dustspeck, m_j) > jε.

Besides which, even if I had somehow messed up, you're not here (I hope) to score easy points because my mathematical formalization is flawed when it is perfectly obvious where I want to go.
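The quantifier-order point can be sketched numerically (toy cost functions with assumed values; the jump check only samples a few m, so this is an illustration, not a proof):

```python
EPS = 0.01   # fixed *before* choosing n and m, as in the comment above

def additive_C(n):
    """Toy additive cost: satisfies the fixed-eps property at every n."""
    return 0.001 * n

def bounded_C(n):
    """The bounded counterexample from upthread: C(n) = 1 - 1/(n+1)."""
    return 1 - 1 / (n + 1)

def has_eps_jump(C, n):
    """Is there a sampled m with C(n+m) > C(n) + EPS?"""
    return any(C(n + m) > C(n) + EPS for m in (1, 100, 10**4, 10**8))

# With EPS fixed in advance, the additive cost keeps jumping at every n...
assert all(has_eps_jump(additive_C, n) for n in (1, 10**3, 10**6))
# ...but the bounded one stops jumping once 1/(n+1) < EPS, so it does not
# satisfy "for every n there is an m" for this fixed EPS.
assert not has_eps_jump(bounded_C, 10**3)
```

With ε fixed first, any function with the property exceeds jε after the j-th jump, so it is unbounded; the bounded counterexample only works if ε is allowed to shrink with n.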

Comment author: private_messaging 27 March 2015 06:40:57PM *  0 points [-]

Well, in my view, some details of implementation of a computation are totally indiscernible 'from the inside' and thus make no difference to the subjective experiences, qualia, and the like.

I definitely don't care if there's 1 me, 3^^^3 copies of me, or 3^^^^3, or 3^^^^^^3, or an actual infinity (as the physics of our universe would suggest), where the copies think and perceive everything exactly the same over their lifetimes. I'm not sure how counting copies as distinct would cope with an infinity of copies anyway. You have torture of inf persons vs dust specks in inf·3^^^3 persons; then what?

Albeit it would be quite hilarious to see if someone here picks up the idea and starts arguing that because they're 'important', there must be a lot of copies of them in the future, and thus they are rightfully a utility monster.